
III IT CCS335-Cloud Computing

UNIT II VIRTUALIZATION BASICS

Virtual Machine Basics – Taxonomy of Virtual Machines – Hypervisor – Key Concepts – Virtualization
structure – Implementation levels of virtualization – Virtualization Types: Full Virtualization – Para
Virtualization – Hardware Virtualization – Virtualization of CPU, Memory and I/O devices.

Virtual Machine Basics:

1. Explain in detail about Virtual Machine and its types.

Virtual Machine:

A Virtual Machine can be defined as an emulation of a computer system. A Virtual Machine is based on computer architectures and provides the functionality of a physical computer. The implementation of a VM may involve specialized software, hardware, or a combination of both.
Virtual Machine Basics

To understand what a virtual machine is, we must first discuss what is meant by a machine, and, as pointed out earlier, the meaning of “machine” is a matter of perspective. From the perspective of a process executing a user program, the machine consists of a logical memory address space that has been assigned to the process, along with user-level registers and instructions that allow the execution of code belonging to the process.
The I/O part of the machine is visible only through the operating system, and the only way the process
can interact with the I/O system is via operating system calls, often through libraries that execute as part of the
process. Processes are usually transient in nature (although not always). They are created, execute for a period of
time, perhaps spawn other processes along the way, and eventually terminate.
To summarize, the machine, from the perspective of a process, is a combination of the operating system and the underlying user-level hardware. The ABI provides the interface between the process and the machine.
A system is a full execution environment that can simultaneously support a number of processes
potentially belonging to different users. All the processes share a file system and other I/O resources. The system
environment persists over time (with occasional reboots) as processes come and go. The system allocates physical
memory and I/O resources to the processes and allows the processes to interact with their resources via an OS
that is part of the system. Hence, the machine, from the perspective of a system, is implemented by the underlying hardware alone, and the ISA provides the interface between the system and the machine.

Fig:2.1 Virtual Machines


Taxonomy of Virtual Machines:
We have just described a rather broad array of VMs, with different goals and implementations. To put them in perspective and organize the common implementation issues, we introduce the taxonomy illustrated in Figure 2.2.
First, VMs are divided into the two major types: process VMs and system VMs. In the first type, the VM
supports an ABI — user instructions plus system calls; in the second, the VM supports a complete ISA — both
user and system instructions. Finer divisions in the taxonomy are based on whether the guest and host use the
same ISA.
On the left-hand side of the figure are process VMs. These include VMs where the host and guest instruction sets are the same. In the figure, we identify two examples. The first is multiprogrammed systems, as already supported on most of today’s systems. The second is same-ISA dynamic binary optimizers, which transform guest instructions only by optimizing them and then execute them natively. For process VMs where the guest and host ISAs are different, we also give two examples. These are dynamic translators and HLL VMs. HLL VMs are connected to the VM taxonomy via a “dotted line” because their process-level interface is at a different, higher level than that of the other process VMs. On the right-hand side of the figure are system VMs. If the guest and host use the same ISA, examples include “classic” system VMs and hosted VMs. In these VMs, the objective is to provide replicated, isolated system environments.
The primary difference between classic and hosted VMs is the VMM implementation rather than the function provided to the user. Examples of system VMs where the guest and host ISAs are different include whole-system VMs and codesigned VMs. With whole-system VMs, performance is often of secondary importance compared to accurate functionality, while with codesigned VMs, performance (and power efficiency) are often major goals. In the figure, codesigned VMs are connected using dotted lines because their interface is typically at a lower level than that of other system VMs.

Fig 2.2 A Taxonomy of Virtual Machines

Types of Virtual Machines: You can classify virtual machines into two types:
1. System Virtual Machine: These virtual machines give us a complete system platform and support the execution of a complete virtual operating system. Just like VirtualBox, a system virtual machine provides an environment in which an OS can be installed completely. We can see in the image below that the hardware of the real machine is distributed between two simulated operating systems by the virtual machine monitor, and programs and processes then run separately on the hardware of each simulated machine.

2. Process Virtual Machine: Unlike a system virtual machine, a process virtual machine does not provide the facility to install a virtual operating system completely. Rather, it creates a virtual environment of that OS while some app or program is running, and this environment is destroyed as soon as we exit the app. As in the image below, some apps run on the main OS while some virtual machines are created to run other apps. This shows that when those programs require a different OS, a process virtual machine provides it for the time the programs are running. Example: the Wine software on Linux helps to run Windows applications.
Virtual Machine Language: This is a type of language that can be understood by different operating systems; it is platform-independent. Just as any programming language (C, Python, or Java) needs a specific compiler that converts the code into system-understandable code (also known as byte code), a virtual machine language works the same way. If we want code that can be executed on different operating systems (Windows, Linux, etc.), then a virtual machine language is helpful.
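The idea of a platform-independent virtual machine language can be sketched with a tiny stack-based bytecode interpreter. The opcodes below are made up for illustration (this is not the JVM or any real bytecode format): the same bytecode runs unchanged wherever the interpreter itself runs.

```python
# Minimal sketch of a platform-independent "virtual machine language":
# a tiny stack-based bytecode interpreter (hypothetical opcodes).

PUSH, ADD, MUL = 0, 1, 2  # our made-up opcodes

def run(bytecode):
    """Execute a list of (opcode, operand) pairs on a value stack."""
    stack = []
    for op, arg in bytecode:
        if op == PUSH:
            stack.append(arg)
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# The "compiled" program (2 + 3) * 4 -- the same bytecode would run on
# any host that has this interpreter, regardless of the underlying CPU.
program = [(PUSH, 2), (PUSH, 3), (ADD, None), (PUSH, 4), (MUL, None)]
print(run(program))  # 20
```

The portability comes from shipping the bytecode, not native machine code: only the interpreter has to be ported to each platform.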

Hypervisor
A hypervisor is a form of virtualization software used in Cloud hosting to divide and allocate the resources on
various pieces of hardware. The program which provides partitioning, isolation, or abstraction is called a virtualization hypervisor. The hypervisor is a hardware virtualization technique that allows multiple guest operating systems (OS) to run on a single host system at the same time. A hypervisor is sometimes also called a virtual machine monitor (VMM).

Types of Hypervisor -
TYPE-1 Hypervisor:
The hypervisor runs directly on the underlying host system. It is also known as a "Native Hypervisor" or "Bare
metal hypervisor". It does not require any base server operating system. It has direct access to hardware resources.
Examples of Type 1 hypervisors include VMware ESXi, Citrix XenServer, and Microsoft Hyper-V hypervisor.

Pros & Cons of Type-1 Hypervisor:


Pros: Such hypervisors are very efficient because they have direct access to the physical hardware resources (like CPU, memory, network, and physical storage). This also strengthens security, because there is no third-party layer in between that an attacker could compromise.
Cons: One problem with Type-1 hypervisors is that they usually need a dedicated separate machine to perform their
operation and to instruct different VMs and control the host hardware resources.

TYPE-2 Hypervisor:
A host operating system runs on the underlying host system. This type is also known as a "Hosted Hypervisor". Such hypervisors do not run directly on the underlying hardware; rather, they run as an application on a host system (physical machine). Basically, the software is installed on an operating system, and the hypervisor asks the operating system to make hardware calls. Examples of Type 2 hypervisors include VMware Player and Parallels Desktop. Hosted hypervisors are often found on endpoints like PCs. The Type-2 hypervisor is very useful for engineers and security analysts (for checking malware, malicious source code, and newly developed applications).
Pros & Cons of Type-2 Hypervisor:
Pros: Such hypervisors allow quick and easy access to a guest operating system alongside the running host machine. These hypervisors usually come with additional useful features for guest machines; such tools enhance the coordination between the host machine and the guest machine.
Cons: With no direct access to the physical hardware resources, these hypervisors lag in performance compared to Type-1 hypervisors. There are also potential security risks: an attacker who compromises the host operating system can then also access the guest operating system.

Choosing the right hypervisor:


Type 1 hypervisors offer much better performance than Type 2 ones because there’s no middle layer, making them
the logical choice for mission-critical applications and workloads. But that’s not to say that hosted hypervisors
don’t have their place – they’re much simpler to set up, so they’re a good bet if, say, you need to deploy a test
environment quickly. One of the best ways to determine which hypervisor meets your needs is to compare their
performance metrics. These include CPU overhead, the amount of maximum host and guest memory, and support
for virtual processors. The following factors should be examined before choosing a suitable hypervisor:
1. Understand your needs: The company and its applications are the reason for the data center (and your job).
Besides your company's needs, you (and your co-workers in IT) also have your own needs. Needs for a
virtualization hypervisor are:
a. Flexibility
b. Scalability
c. Usability
d. Availability
e. Reliability
f. Efficiency
g. Reliable support

2. The cost of a hypervisor: For many buyers, the toughest part of choosing a hypervisor is striking the right
balance between cost and functionality. While a number of entry-level solutions are free, or practically free, the
prices at the opposite end of the market can be staggering. Licensing frameworks also vary, so it’s important to
be aware of exactly what you’re getting for your money.
3. Virtual machine performance: Virtual systems should meet or exceed the performance of their physical counterparts, at least in relation to the applications within each server. Everything beyond meeting this benchmark is profit.

4. Ecosystem: It’s tempting to overlook the role of a hypervisor’s ecosystem – that is, the availability of
documentation, support, training, third-party developers and consultancies, and so on – in determining whether
or not a solution is cost-effective in the long term.

5. Test for yourself: You can gain basic experience from your existing desktop or laptop. You can run both
VMware vSphere and Microsoft Hyper-V in either VMware Workstation or VMware Fusion to create a nice
virtual learning and testing environment.

VIRTUALIZATION STRUCTURE:
2. Explain in detail about Hardware based Virtualization.(or)Give the Virtualization Structure and
Explain the various types of Virtualization.(May-2023)

Each instance of an operating system is called a Virtual Machine (VM), and the operating system that runs inside a virtual machine is called the guest operating system. Depending on the position of the virtualization layer, there are two classes of VM architectures, namely bare-metal and host-based. The hypervisor is the software used for doing virtualization, also known as the VMM (Virtual Machine Monitor). The hypervisor software provides two different structures of virtualization, namely the Hosted structure (also called Type 2 virtualization) and the Bare-Metal structure (also called Type 1 virtualization).

Hosted Structure (Type II)(Hypervisor)


In the hosted structure, the guest OS and applications run on top of the base or host OS with the help of the VMM (called the hypervisor). The VMM sits between the base OS and the guest OS. This approach provides better hardware compatibility because the base OS, rather than the VMM, is responsible for providing hardware drivers to the guest OS. In this type, the hypervisor has to rely on the host OS for pass-through permissions to access hardware. In many cases, a hosted hypervisor needs an emulator, which lies between the guest OS and the VMM, to translate instructions into native format. The hosted structure is shown in Fig. 2.2.1.

Fig. 2.2.1 Hosted Structure (Type II Hypervisor)

To implement the hosted structure, a base OS needs to be installed first, over which the VMM can be installed. The hosted structure is a simple solution for running multiple desktop OSes independently. Fig. 2.2.2 (a) and (b) show Windows running on a Linux base OS and Linux running on a Windows base OS using a hosted hypervisor.

Fig. 2.2.2 Hosted Hypervisors

The popular hosted hypervisors are QEMU, VMware Workstation, Microsoft Virtual PC, Oracle
VirtualBox etc.

The advantages of hosted structure are

 It is easy to install and manage without disturbing the host system's hardware.


 It supports legacy operating systems and applications.
 It provides ease of use with greater hardware compatibility.
 It does not require installing any drivers for I/O devices, as they are provided through the host's built-in driver stack.

 It can be used for testing beta software.


 The hosted hypervisors are usually free software and can be run on user
workstations.
The disadvantages of hosted structure are

 It does not allow the guest OS to access the hardware directly; instead, access has to go through the base OS, which increases resource overhead.
 Virtual machine performance is slow and degraded due to reliance on the intermediate host OS for hardware access.

 It does not scale up beyond a limit.

Bare-Metal Structure (Type I) (or) Native Bare-Metal Structure:

 In the Bare-Metal structure, the VMM is installed directly on top of the hardware, so no intermediate host OS is needed. The VMM can communicate directly with the hardware and does not rely on a host system for pass-through permission, which results in better performance, scalability, and stability. The Bare-Metal structure is shown in Fig. 2.2.3.
 Bare-metal virtualization is mostly used in enterprise data centers for getting the advanced features like
resource pooling, high availability, disaster recovery and security.

Fig. 2.2.3 Bare-Metal Structure (Type-I Hypervisor)



Fig. 2.2.4 Bare-Metal Xen Server Hypervisor

The popular Bare-Metal hypervisors are Citrix XenServer, VMware ESXi, and Microsoft Hyper-V.

The advantages of Bare-Metal structure are

 It is faster in performance and more efficient to use.


 It provides enterprise features like high scalability, disaster recovery and high availability.
 It has high processing power due to the resource pooling.

Implementation Levels of Virtualization

3. Discuss in detail about the categories of hardware virtualization depending on implementation


technologies. Nov/Dec 2021(or)Discuss how Virtualization implemented in different layers of cloud
in detail.(May-2022)

Virtualization is implemented at various levels by creating a software abstraction layer between the host OS and guest OS. The main function of the software layer is to virtualize the physical hardware of the host machine into virtual resources used by the VMs through various operational layers. The different levels at which virtualization can be implemented are shown in Fig. 2.3.1. There are five implementation levels of virtualization, namely the Instruction Set Architecture (ISA) level, Hardware level, Operating System level, Library support level, and Application level, which are explained as follows.

1) Instruction Set Architecture Level


 Virtualization at the instruction set architecture level is implemented by emulating an instruction set architecture completely in software. An emulator tries to execute instructions issued by the guest machine (the virtual machine being emulated) by translating them to a set of native instructions and then executing them on the available hardware.

Fig. 2.3.1 Implementation Levels of Virtualization

 That is, the emulator works by translating instructions from the guest platform into instructions of the host platform. These include both processor-oriented instructions (add, sub, jump, etc.) and the I/O-specific (IN/OUT) instructions for the devices. Although this virtual machine architecture works fine in terms of simplicity and robustness, it has its own pros and cons.

 The advantages of ISA-level virtualization are that it provides ease of implementation while dealing with multiple platforms, and it can easily provide the infrastructure through which one can create virtual machines based on the x86 platform on hosts such as SPARC and Alpha. The disadvantage of ISA-level virtualization is that every instruction issued by the emulated computer needs to be interpreted in software first, which degrades performance.

 The popular emulators of ISA-level virtualization are:

a) Bochs

It is a highly portable emulator that can be run on most popular platforms that include x86, PowerPC,
Alpha, Sun, and MIPS. It can be compiled to emulate most of the versions of x86 machines including 386,
486, Pentium, Pentium Pro or AMD64 CPU, including optional MMX, SSE, SSE2, and 3DNow instructions.

b) QEMU

QEMU (Quick Emulator) is a fast processor emulator that uses a portable dynamic translator. It supports two operating modes: user-space-only, and full system emulation. In the former mode, QEMU can launch Linux processes compiled for one CPU on another CPU, or be used for cross-compilation and cross-debugging. In the latter mode, it can emulate a full system that includes a processor and several peripheral devices. It supports emulation of a number of processor architectures, including x86, ARM, PowerPC, and SPARC.

c) Crusoe

The Crusoe processor comes with a dynamic x86 emulator, called the code morphing engine, that can execute any x86-based application on top of it. The Crusoe is designed to handle the x86 ISA's precise exception semantics without constraining speculative scheduling. This is accomplished by shadowing all registers holding the x86 state.
d) BIRD

BIRD is an interpretation engine for x86 binaries that currently supports only x86 as the host ISA and aims
to extend for other architectures as well. It exploits the similarity between the architectures and tries to
execute as many instructions as possible on the native hardware. All other instructions are supported
through software emulation.
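The per-instruction cost of ISA-level emulation can be sketched as follows: every guest instruction is decoded and emulated in software, so each guest instruction costs several host operations. The three-instruction guest ISA below (MOV/ADD/OUT, with three registers) is hypothetical, chosen only to illustrate the technique.

```python
# Sketch of ISA-level emulation: each guest instruction is decoded and
# emulated in software. The guest ISA here is hypothetical.

def emulate(guest_program):
    regs = {"r0": 0, "r1": 0, "r2": 0}   # emulated guest registers
    output = []                           # emulated I/O device (OUT port)
    for instr in guest_program:
        op = instr[0]
        if op == "MOV":                   # MOV reg, immediate
            _, reg, imm = instr
            regs[reg] = imm
        elif op == "ADD":                 # ADD dst, src  (dst += src)
            _, dst, src = instr
            regs[dst] = regs[dst] + regs[src]
        elif op == "OUT":                 # OUT reg: write reg to I/O port
            _, reg = instr
            output.append(regs[reg])
        else:
            raise ValueError(f"unknown guest instruction {op!r}")
    return regs, output

guest = [("MOV", "r0", 5), ("MOV", "r1", 7), ("ADD", "r0", "r1"), ("OUT", "r0")]
regs, output = emulate(guest)
print(regs["r0"], output)  # 12 [12]
```

Note that both processor-oriented (MOV/ADD) and I/O-specific (OUT) instructions go through the same software decode loop, which is exactly why pure ISA-level emulation is simple but slow.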

2) Hardware Abstraction Layer


 Virtualization at the Hardware Abstraction Layer (HAL) exploits the similarity in architectures of the guest and host platforms to cut down interpretation latency. The time spent interpreting guest-platform instructions into host-platform instructions is reduced by exploiting the similarities between them. The virtualization technique maps the virtual resources to physical resources and uses the native hardware for computation in the virtual machine. This approach generates a virtual hardware environment that virtualizes the computer resources like CPU, memory, and I/O devices.
 For HAL virtualization to work correctly, the VM must be able to trap every privileged instruction execution and pass it to the underlying VMM, because multiple VMs running their own OSes may issue privileged instructions that need the CPU's full attention. If this is not managed properly, an instruction may fail silently instead of generating the trap that sends it to the VMM, causing the VM to crash. However, the most popular platform, x86, is not fully virtualizable, because certain privileged instructions fail silently rather than trapping when executed with insufficient privileges. Some of the popular HAL virtualization tools are

a) VMware
The VMware products are targeted towards x86-based workstations and servers. Thus, it has to deal with
the complications that arise as x86 is not a fully-virtualizable architecture. The VMware deals with this
problem by using a patent-pending technology that dynamically rewrites portions of the hosted machine code
to insert traps wherever VMM intervention is required. Although it solves the problem, it adds some overhead
due to the translation and execution costs. VMware tries to reduce the cost by caching the results and reusing
them wherever possible. Nevertheless, it again adds some caching cost that is hard to avoid.

b) Virtual PC
The Microsoft Virtual PC is based on the Virtual Machine Monitor (VMM) architecture that lets the user create and configure one or more virtual machines. It provides most of the same functions as VMware, but additional functions include the undo disk operation, which lets the user easily undo previous operations on the hard disks of a VM. This enables easy data recovery and can come in handy in several circumstances.

c) Denali

The Denali project was developed at the University of Washington to address issues related to the scalability of VMs. It introduced a new virtualization architecture, also called para-virtualization, to support thousands of simultaneous machines, which they call lightweight virtual machines. It tries to increase the scalability and performance of virtual machines without too much implementation complexity.

3) Operating System Level Virtualization

 Operating system level virtualization is an abstraction layer between the OS and user applications. It allows multiple operating system instances and applications to run simultaneously without requiring a reboot or dual boot. The degree of isolation of each OS is very high and can be implemented at low risk with easy maintenance. The implementation of operating system level virtualization includes operating system installation, application suite installation, network setup, and so on. Therefore, if the required OS is the same as the one on the physical machine, the user basically ends up duplicating most of the effort he/she has already invested in setting up the physical machine. To run applications properly, the operating system keeps the application-specific data structures, user-level libraries, environment settings, and other requisites separate.
 The key idea behind all OS-level virtualization techniques is that the virtualization layer above the OS produces, on demand, a partition per virtual machine that is a replica of the operating environment on the physical machine. With a careful partitioning and multiplexing technique, each VM can export a full operating environment and remain fairly isolated from the others and from the underlying physical machine.
 The popular OS level virtualization tools are

a) Jail

Jail is FreeBSD-based virtualization software that provides the ability to partition an operating system environment while maintaining the simplicity of the UNIX "root" model. The environments captured within a jail are typical system resources and data structures such as processes, file systems, network resources, etc. A process in a partition is referred to as an "in jail" process. When the system is booted up after a fresh install, no processes will be in jail. When a process is placed in a jail, all of its descendants created after the jail creation, along with itself, remain within the jail. A process may not belong to more than one jail. Jails are created by a privileged process when it invokes the special system call jail. Every call to jail creates a new jail; the only way for a new process to enter the jail is by inheriting access to the jail from another process already in that jail.
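The jail membership rules just described (descendants inherit the jail; a process may belong to at most one jail) can be modeled with a small sketch. The class and attribute names are invented for illustration; this is not the actual FreeBSD implementation.

```python
# Toy model of FreeBSD jail membership rules (illustrative only):
# a child inherits its parent's jail, and a process may belong
# to at most one jail.

class Process:
    def __init__(self, pid, jail=None):
        self.pid = pid
        self.jail = jail          # None means "not in any jail"

    def fork(self, pid):
        # Descendants created after jail entry stay inside the same jail.
        return Process(pid, jail=self.jail)

    def enter_jail(self, jail):
        if self.jail is not None:
            raise RuntimeError("a process may not belong to more than one jail")
        self.jail = jail

init = Process(1)                  # fresh boot: no processes are jailed
web = init.fork(100)
web.enter_jail("www-jail")         # privileged process invokes jail()
worker = web.fork(101)             # child remains in the jail
print(worker.jail)                 # www-jail
```

Attempting `web.enter_jail("other-jail")` after this would raise, reflecting the one-jail-per-process rule.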

b) Ensim

Ensim virtualizes a server's native operating system so that it can be partitioned into isolated computing environments called virtual private servers. These virtual private servers operate independently of each other, just like dedicated servers. It is commonly used in hosting environments to allocate hardware resources among a large number of distributed users.

4) Library Level Virtualization

Most systems use an extensive set of Application Programming Interfaces (APIs) instead of legacy system calls to implement various libraries at the user level. Such APIs are designed to hide operating-system-related details and keep things simpler for normal programmers. In this technique, the virtual environment is created above the OS layer, and it is mostly used to implement a different Application Binary Interface (ABI) and Application Programming Interface (API) using the underlying system.
The example of library-level virtualization is WINE. Wine is an implementation of the Windows API and can be used as a library to port Windows applications to UNIX. It is a virtualization layer on top of X and UNIX that exports the Windows API/ABI, allowing Windows binaries to run on top of it.
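Library-level virtualization of the Wine kind can be sketched as a shim: a library exposes one API and implements it in terms of the host's native facilities, so the application never talks to the host API directly. The "Windows-style" function name below is invented for illustration; it is not one of Wine's real entry points.

```python
# Sketch of library-level virtualization: a shim implements a foreign
# API on top of the native one. The Windows-flavoured function below
# is invented for illustration; Wine's real entry points differ.

import os

def CreateFileW(path, mode="w"):
    """Made-up Windows-style API, implemented with the host's
    native (POSIX-style) file facilities underneath."""
    # Translate the foreign call into native operations.
    fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
    return os.fdopen(fd, mode)

# The "Windows application" calls only the shim, never the host API:
with CreateFileW("demo.txt") as handle:
    handle.write("hello from the guest API")
print(open("demo.txt").read())  # hello from the guest API
```

The application code is unchanged; only the library underneath it is replaced, which is exactly the ABI/API re-implementation the text describes.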

5) Application Level Virtualization

In this abstraction technique, operating systems and user-level programs execute like applications for the machine. Therefore, specialized instructions are needed for hardware manipulation, such as I/O-mapped (manipulating the I/O) and memory-mapped (mapping a chunk of memory to the I/O and then manipulating the memory) operations. The group of such special instructions constitutes the application-level virtual machine. The Java Virtual Machine (JVM) is the popular example of application level virtualization, which creates a virtual machine at the application level rather than the OS level. It supports a self-defined set of instructions called Java bytecodes. Such VMs pose little security threat to the system while letting the user treat them like physical machines. Like a physical machine, it has to provide an operating environment to its applications, either by hosting a commercial operating system or by coming up with its own environment.
The comparison between different levels of virtualization is shown in Table 2.4.1.

Implementation Level                Performance   Application   Implementation   Application
                                                  Flexibility   Complexity       Isolation
--------------------------------------------------------------------------------------------
Instruction Set Architecture        Very Poor     Very Good     Medium           Medium
Level (ISA)
Hardware Abstraction Level (HAL)    Very Good     Medium        Very Good        Good
Operating System Level              Very Good     Poor          Medium           Poor
Library Level                       Medium        Poor          Poor             Poor
Application Level                   Poor          Poor          Very Good        Very Good

Table 2.4.1 Comparison between different implementation levels of virtualization

4. What are different Mechanisms of Virtualizations?


Virtualization Mechanisms or Types of Virtualization

Every hypervisor uses certain mechanisms to control and manage virtualization strategies that allow different operating systems, such as Linux and Windows, to run on the same physical machine simultaneously. Depending on the position of the virtualization layer, there are several classes of VM mechanisms, namely binary translation, para-virtualization, full virtualization, hardware-assisted virtualization, and host-based virtualization. The mechanisms of virtualization defined by VMware and other virtualization providers are explained as follows.

Binary Translation with Full Virtualization:


Based on the implementation technologies, hardware virtualization can be characterized into two types, namely full virtualization with binary translation and host-based virtualization. The binary translation mechanism with full and host-based virtualization is explained as follows.

a) Binary translation

In binary translation of the guest OS, the VMM runs at Ring 0 and the guest OS at Ring 1. The VMM scans the instruction stream and identifies the privileged, control-sensitive, and behavior-sensitive instructions. When these instructions are identified, they are trapped into the VMM, which emulates the behavior of these instructions. The method used in this emulation is called binary translation. The binary translation mechanism is shown in Fig. 2.5.3.

Fig. 2.5.3 Binary Translation mechanism
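The mechanism above can be sketched as a scan-and-patch loop: the VMM walks the guest instruction stream, lets innocuous instructions through unchanged, and replaces each sensitive/privileged instruction with a trap into an emulation routine. The instruction mnemonics here are hypothetical.

```python
# Sketch of binary translation: privileged/sensitive guest instructions
# are replaced with traps into the VMM, which emulates their behaviour.
# Instruction mnemonics are hypothetical.

SENSITIVE = {"CLI", "STI", "HLT"}     # instructions the VMM must intercept

def vmm_emulate(instr):
    # Stand-in for the VMM's software emulation of a privileged instruction.
    return f"TRAP->VMM({instr})"

def binary_translate(guest_stream):
    translated = []
    for instr in guest_stream:
        if instr in SENSITIVE:
            translated.append(vmm_emulate(instr))   # trap and emulate
        else:
            translated.append(instr)                # runs natively
    return translated

print(binary_translate(["MOV", "CLI", "ADD", "HLT"]))
# ['MOV', 'TRAP->VMM(CLI)', 'ADD', 'TRAP->VMM(HLT)']
```

Only the sensitive instructions pay the emulation cost; the rest of the stream executes directly, which is the point of combining binary translation with direct execution.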

b) Full Virtualization

In full virtualization, the guest OS does not require any modification to its OS code. Instead, it relies on binary translation to virtualize the execution of certain sensitive, non-virtualizable instructions by trapping them. Most guest operating systems and their applications are composed of critical and noncritical instructions. These instructions are executed with the help of the binary translation mechanism.

With full virtualization, noncritical instructions run on the hardware directly, while critical instructions are discovered and replaced with traps into the VMM to be emulated in software. In host-based virtualization, both the host OS and guest OS take part in virtualization, with the virtualization software layer lying between them.
Therefore, full virtualization works with binary translation to perform direct execution of instructions, where the guest OS is completely decoupled from the underlying hardware and is consequently unaware that it is being virtualized. Full virtualization gives degraded performance because it involves binary translation of instructions before executing them, which is time-consuming. In particular, full virtualization of I/O-intensive applications is a big challenge: binary translation employs a code cache to store translated instructions to improve performance, but this increases memory usage.
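The code cache mentioned above trades memory for speed: a block of guest code is translated once, cached, and reused on later executions. A sketch, with a stand-in for the expensive translation step:

```python
# Sketch of a binary-translation code cache: each guest basic block is
# translated once and the result reused, trading memory for speed.

translation_cache = {}
translate_count = 0   # how many times the (expensive) translator ran

def translate_block(block):
    """Stand-in for the expensive guest->host translation step."""
    global translate_count
    translate_count += 1
    return tuple("host_" + instr for instr in block)

def execute_block(block):
    key = tuple(block)
    if key not in translation_cache:          # translate only on first use
        translation_cache[key] = translate_block(block)
    return translation_cache[key]             # cached host code thereafter

loop_body = ["LOAD", "ADD", "STORE"]
for _ in range(1000):                         # hot loop: re-executed often
    execute_block(loop_body)
print(translate_count)                        # 1
```

The hot loop is translated exactly once despite 1000 executions, while the cache itself is the extra memory cost the text refers to.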

c) Host-based virtualization

In host-based virtualization, the virtualization layer runs on top of the host OS, and the guest OS runs over the virtualization layer. The host OS is therefore responsible for managing the hardware and controlling the instructions executed by the guest OS.

Host-based virtualization does not require modifying the host OS code, but the virtualization software has to rely on the host OS to provide device drivers and other low-level services. This architecture simplifies VM design and eases deployment, but gives degraded performance compared to other hypervisor architectures because of host OS intervention.

The host OS performs four layers of mapping during any I/O request by the guest OS or VMM, which degrades performance significantly.

Para-Virtualization
Para-virtualization is one of the efficient virtualization techniques; it requires explicit modification of the guest operating systems. The para-virtualized VM provides special APIs that the modified guest OS uses in place of certain privileged operations.
In some virtualized systems, performance degradation becomes a critical issue. Para-virtualization therefore attempts to reduce the virtualization overhead, and thus improve performance, by modifying only the guest OS kernel. The para-virtualization architecture is shown in Fig. 2.5.4.

Fig. 2.5.4 Para-virtualization architecture


The x86 processor uses four instruction execution rings, namely Rings 0, 1, 2, and 3. Ring 0 has the highest privilege for executing instructions, while Ring 3 has the lowest. The OS is responsible for managing the hardware, and its privileged instructions execute in Ring 0, while user-level applications run in Ring 3. The KVM hypervisor is a well-known example of para-virtualization. The functioning of para-virtualization is shown in Fig. 2.5.5.

Fig. 2.5.5 Para-virtualization (Source : VMware)


In para-virtualization, the virtualization layer is inserted between the hardware and the OS. Since the x86 architecture requires the virtualization layer to be installed at Ring 0, other instructions issued at Ring 0 may cause problems. In this architecture, the nonvirtualizable instructions are replaced with hypercalls that communicate directly with the hypervisor or VMM. User applications execute directly on the host system hardware upon user request.

Para-virtualization has some disadvantages: although it reduces CPU overhead, it still has many issues with the compatibility and portability of the virtual system, it incurs high implementation and maintenance costs, and its performance varies with workload variation. Popular examples of para-virtualization are Xen, KVM, and VMware ESXi.
a) Para-Virtualization with Compiler Support

Para-virtualization can also handle privileged instructions before run time. Whereas the full virtualization architecture executes sensitive privileged instructions by intercepting and emulating them at run time, para-virtualization can handle such instructions at compile time. In para-virtualization with compiler support, the guest OS kernel is modified to replace the privileged and sensitive instructions with hypercalls to the hypervisor or VMM at compile time itself. The Xen hypervisor assumes such a para-virtualization architecture.
Here, a guest OS running in a guest domain may run at Ring 1 instead of Ring 0, so the guest OS may not be able to execute some privileged and sensitive instructions. Such privileged instructions are therefore implemented by hypercalls to the hypervisor. After replacing the instructions with hypercalls, the modified guest OS emulates the behavior of the original guest OS.
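The compile-time substitution described above can be illustrated with a minimal sketch (the opcode and hypercall names below are hypothetical, loosely styled after Xen's hypercall naming, and a real implementation rewrites machine code rather than strings):

```python
# Sketch of para-virtualization with compiler support: at "compile time"
# each privileged/sensitive instruction in the guest kernel is replaced
# by a hypercall, so nothing needs to be trapped at run time.
# Instruction and hypercall names are illustrative assumptions.
PRIVILEGED = {
    "write_cr3": "HYPERVISOR_mmu_update",   # page-table switch -> hypercall
    "out":       "HYPERVISOR_physdev_op",   # port I/O -> hypercall
}

def paravirtualize(kernel_code):
    """Return a modified kernel in which each privileged instruction
    has been replaced by the corresponding hypercall."""
    return [PRIVILEGED.get(ins, ins) for ins in kernel_code]

guest_kernel = ["mov", "write_cr3", "add", "out"]
modified = paravirtualize(guest_kernel)
```

After this pass, the modified kernel contains no privileged instructions at all; every sensitive operation is requested from the hypervisor explicitly.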
Virtualization of CPU, Memory, And I/O Devices
5. Explain in detail about Virtualization of CPU, Memory, And I/O Devices.( Nov/Dec 2021)
Virtualization of CPU

CPU virtualization is related to the range of protection levels, called rings, in which code can execute. The Intel x86 CPU architecture offers four levels of privilege, known as Rings 0, 1, 2, and 3.

Fig. 2.6.1 CPU Privilege Rings


Rings 0, 1, and 2 are associated with the operating system, while Ring 3 is reserved for applications. Ring 0 is used by the kernel and therefore has the highest privilege level, while Ring 3 has the lowest privilege, as it belongs to user-level applications, as shown in Fig. 2.6.1. User-level applications typically run in Ring 3; the operating system needs direct access to the memory and hardware and must execute its privileged instructions in Ring 0. Therefore, virtualizing the x86 architecture requires placing a virtualization layer under the operating system to create and manage the virtual machines that deliver shared resources. Some sensitive instructions cannot be virtualized directly, as they have different semantics. Without virtualization support, trapping and translating those sensitive and privileged instructions at runtime becomes a challenge.
The x86 privilege level architecture without virtualization is shown in Fig. 2.6.2.

Fig. 2.6.2 X86 privilege level architecture without virtualization


In most virtualization systems, the majority of VM instructions are executed on the host processor in native mode. Hence, the unprivileged instructions of VMs can run directly on the host machine for higher efficiency.
Privileged instructions are executed in a privileged mode and get trapped if executed outside this mode. Control-sensitive instructions attempt to change the configuration of resources used during execution, while behavior-sensitive instructions behave differently depending on the configuration of resources, including the load and store operations over virtual memory.
Generally, a CPU architecture is virtualizable if and only if it allows a VM's privileged and unprivileged instructions to run in the CPU's user mode while the VMM runs in supervisor mode. When the privileged instructions, along with the control- and behavior-sensitive instructions, of a VM are executed, they get trapped in the VMM. In this way, the VMM becomes the unified mediator for hardware access from the different VMs and guarantees the correctness and stability of the whole system.
However, not all CPU architectures are virtualizable. Three techniques can be used to handle sensitive and privileged instructions and virtualize the CPU on the x86 architecture:
1. Binary translation with full virtualization
2. OS assisted virtualization or para-virtualization
3. Hardware assisted virtualization

The above techniques are explained in detail as follows.


1) Binary translation with full virtualization

In binary translation, the virtual machine issues privileged instructions contained within its compiled code. The VMM takes control of these instructions and changes the code under execution to avoid impacting the state of the system. The full virtualization technique does not need to modify the host operating system; it relies on binary translation to trap and virtualize the execution of certain instructions.
The noncritical instructions run directly on the hardware, while the critical instructions must first be discovered and then replaced with traps into the VMM to be emulated in software. This combination of binary translation and direct execution provides full virtualization, as the guest OS is completely decoupled from the underlying hardware by the virtualization layer. The guest OS is not aware that it is being virtualized and requires no modification. The performance of full virtualization may not be ideal, because binary translation at run time is time-consuming and can incur a large performance overhead. Full virtualization offers the best isolation and security for virtual machines, and it simplifies migration and portability, as the same guest OS instance can run virtualized or on native hardware. Full virtualization is supported by the VMware and Microsoft hypervisors. Binary translation with full virtualization is shown in Fig. 2.6.3.

Fig. 2.6.3 Binary Translation with Full Virtualization

2) OS-assisted virtualization or para-virtualization


The para-virtualization technique establishes communication between the guest OS and the hypervisor to improve performance and efficiency. Para-virtualization involves modifying the OS kernel to replace the nonvirtualizable instructions with hypercalls that communicate directly with the virtualization layer or hypervisor. A hypercall is based on the same concept as a system call: a call made by the guest OS to the hypervisor is called a hypercall. In para-virtualization, the hypervisor is responsible for providing hypercall interfaces for other critical kernel operations such as memory management, interrupt handling, and timekeeping. Fig. 2.6.4 shows para-virtualization.

Fig. 2.6.4 Para-virtualization

3) Hardware Assisted Virtualization (HVM)

This technique attempts to simplify virtualization, because full and para-virtualization are complicated in nature. Processor makers such as Intel and AMD provide their own proprietary CPU virtualization technologies, called Intel VT-x and AMD-V, which add an additional privilege mode to x86 processors. All the privileged and sensitive instructions are trapped in the hypervisor automatically. This technique removes the difficulty of implementing the binary translation of full virtualization, and it lets operating systems run in VMs without modification. Both technologies target privileged instructions with a new CPU execution mode that allows the VMM to run in a new root mode below Ring 0, also referred to as Ring 0P (privileged root mode), while the guest OS runs in Ring 0D (de-privileged non-root mode). Privileged and sensitive calls are set to trap automatically to the hypervisor running on the hardware, which removes the need for either binary translation or para-virtualization. Fig. 2.6.5 shows hardware-assisted virtualization.

Fig. 2.6.5 Hardware Assisted Virtualization
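A toy model of this trap behavior can be sketched as follows (illustrative only: the opcode names and the dispatch loop are assumptions, not a real VT-x/AMD-V interface):

```python
# Toy model of hardware-assisted virtualization: the guest runs in
# non-root mode (Ring 0D); any privileged instruction causes an
# automatic "VM exit" into the VMM, which runs in root mode (Ring 0P)
# and emulates the instruction. Opcode names are hypothetical.
PRIVILEGED = {"invlpg", "wrmsr"}  # instructions that force a VM exit

def run_guest(instructions):
    log = []
    for ins in instructions:
        if ins in PRIVILEGED:
            # hardware traps to the VMM in root mode automatically
            log.append(f"vm_exit -> vmm handles {ins}")
        else:
            # unprivileged instructions execute directly on the CPU
            log.append(f"guest executes {ins}")
    return log

trace = run_guest(["mov", "wrmsr", "add"])
```

The key point mirrored here is that the guest binary is unmodified: the trap is triggered by the hardware mode, not by rewritten code or hypercalls.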

Virtualization Of Memory
6. Explain in detail about virtualization of memory with an example.
Virtualization of Memory
Memory virtualization involves physical memory being shared and dynamically allocated to virtual machines. In a traditional execution environment, the operating system maintains the mappings of virtual memory to machine memory using page tables, a single-stage mapping from virtual memory to machine memory. All recent x86 CPUs comprise a built-in Memory Management Unit (MMU) and a Translation Lookaside Buffer (TLB) to improve virtual memory performance. However, in a virtual execution environment, mappings are required from virtual memory to physical memory and from physical memory to machine memory; hence a two-stage mapping process is required.
Modern OSes provide virtual memory support that is similar to memory virtualization. Virtualized memory is seen by applications as a contiguous address space that is not tied to the underlying physical memory in the system. The operating system maps the virtual page numbers to physical page numbers stored in page tables. Therefore, to run multiple virtual machines with guest OSes on a single system, the MMU has to be virtualized, as shown in Fig. 2.7.1.

Fig. 2.7.1 Memory Virtualization

The guest OS controls the mapping of virtual addresses to the guest physical memory addresses, but the guest OS cannot directly access the actual machine memory. The VMM is responsible for mapping the guest physical memory to the actual machine memory, and it uses shadow page tables to accelerate the mappings. The VMM uses the TLB (Translation Lookaside Buffer) hardware to map virtual memory directly to machine memory, avoiding the two levels of translation on every access. When the guest OS changes the virtual-to-physical memory mapping, the VMM updates the shadow page tables to enable a direct lookup. Hardware-assisted memory virtualization, as provided by AMD processors, assists the two-stage address translation in a virtual execution environment by using a technology called nested paging.
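The two-stage mapping and the shadow page table's role as a cached composition of the two stages can be sketched as follows (the page numbers are arbitrary illustrative values):

```python
# Sketch of two-stage memory mapping under a VMM: the guest page table
# maps guest-virtual to guest-physical pages, the VMM maps guest-physical
# to machine pages, and a shadow page table caches the composed
# virtual -> machine mapping for a direct one-step lookup.
guest_page_table = {0: 7, 1: 3}   # guest virtual page -> guest physical page
vmm_page_table   = {7: 21, 3: 9}  # guest physical page -> machine page

shadow_page_table = {}            # composed mapping maintained by the VMM

def translate(vpn):
    if vpn in shadow_page_table:        # fast path: single lookup
        return shadow_page_table[vpn]
    ppn = guest_page_table[vpn]         # stage 1: guest OS mapping
    mpn = vmm_page_table[ppn]           # stage 2: VMM mapping
    shadow_page_table[vpn] = mpn        # cache the composed mapping
    return mpn

m0 = translate(0)
m1 = translate(1)
```

When the guest OS changes its page table, the corresponding shadow entries must be invalidated or rewritten, which is exactly the bookkeeping that nested paging moves into hardware.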

Virtualization of I/O Device:

Virtualization of devices and I/O is more difficult than CPU virtualization. It involves managing the routing of I/O requests between virtual devices and the shared physical hardware. Software-based I/O virtualization and management techniques can be used for device and I/O virtualization to enable a rich set of features and simplified management. The network is an integral component of the system that enables communication between different VMs. I/O virtualization provides virtual NICs and switches that create virtual networks between the virtual machines without the network traffic consuming bandwidth on the physical network. NIC teaming allows multiple physical NICs to appear as one and provides failover transparency for virtual machines. It allows virtual machines to be seamlessly relocated to different systems using VMware VMotion while keeping their existing MAC addresses. The key to effective I/O virtualization is to preserve the virtualization benefits with minimum CPU utilization. Fig. 2.7.2 shows device and I/O virtualization.

Fig. 2.7.2 Device and I/O virtualization

The virtual devices shown in Fig. 2.7.2 can effectively emulate well-known hardware and translate the virtual machine requests to the system hardware. Standardized device drivers help with virtual machine standardization. Portability in I/O virtualization allows all the virtual machines across platforms to be configured and run on the same virtual hardware, regardless of the actual physical hardware in the system. There are four methods of implementing I/O virtualization, namely full device emulation, para-virtualization, direct I/O virtualization, and self-virtualized I/O.
In full device emulation, the I/O devices are virtualized using emulation software. This method can emulate all well-known, real-world devices. The emulation software performs all the functions of a device or bus infrastructure, such as device enumeration, identification, interrupts, and DMA, which are replicated in software. The software runs inside the VMM and acts as a virtual device. In this method, the I/O access requests of the guest OS are trapped in the VMM, which interacts with the I/O devices. Multiple VMs can thus share a single hardware device while running concurrently. However, software emulation consumes more time per I/O access, so it runs much slower than the hardware it emulates.
In the para-virtualization method of I/O virtualization, a split driver model is used, consisting of a frontend driver and a backend driver. It is used in the Xen hypervisor, with the drivers placed in Domain 0 and Domain U. The frontend driver runs in Domain U while the backend driver runs in Domain 0, and the two drivers interact with each other via a block of shared memory. The frontend driver is responsible for managing the I/O requests of the guest OSes, while the backend driver is responsible for managing the real I/O devices and multiplexing the I/O data of different VMs.
The para-virtualization method of I/O virtualization achieves better device performance than full device
emulation but with a higher CPU overhead.
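A minimal sketch of the split-driver model (the class names, ring structure, and device behavior are simplified stand-ins, not the actual Xen interfaces):

```python
# Sketch of para-virtualized I/O with split drivers: a frontend driver
# in each guest domain (Domain U) places requests into a shared-memory
# ring, and the backend driver in Domain 0 multiplexes the requests of
# all VMs onto the real device. Names and behavior are illustrative.
from collections import deque

class SharedRing:
    """Stands in for the block of shared memory between the drivers."""
    def __init__(self):
        self.requests = deque()

class FrontendDriver:                  # runs in Domain U
    def __init__(self, domain, ring):
        self.domain, self.ring = domain, ring

    def submit(self, request):
        self.ring.requests.append((self.domain, request))

class BackendDriver:                   # runs in Domain 0
    def __init__(self, ring):
        self.ring = ring

    def service_all(self):
        served = []
        while self.ring.requests:      # multiplex I/O of different VMs
            domain, request = self.ring.requests.popleft()
            served.append(f"dom0 performs {request} for {domain}")
        return served

ring = SharedRing()
FrontendDriver("domU1", ring).submit("read block 5")
FrontendDriver("domU2", ring).submit("write block 9")
served = BackendDriver(ring).service_all()
```

The extra CPU overhead mentioned above shows up here as the copying and queue management between the two halves of the driver.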
In direct I/O virtualization, the virtual machines access I/O devices directly, without relying on any emulation by the VMM. It can give better I/O performance, without high CPU cost, than the para-virtualization method. It was originally designed with a focus on networking for mainframes.
In the self-virtualized I/O method, the rich resources of a multicore processor are harnessed. Self-virtualized I/O encapsulates all the tasks associated with virtualizing an I/O device. It provides virtual devices with an associated access API to the VMs and a management API to the VMM, defining one Virtual Interface (VIF) for every kind of virtualized I/O device.
The virtualized I/O interfaces include virtual network interfaces, virtual block devices (disks), virtual camera devices, and others. The guest OS interacts with the virtual interfaces via device drivers. Each VIF carries a unique ID identifying it in self-virtualized I/O and consists of two message queues: one for outgoing messages to the device and one for incoming messages from the device.
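A toy VIF with a unique ID and the two message queues might look like this (all names are illustrative, not any real self-virtualized I/O API):

```python
# Toy Virtual Interface (VIF) as described for self-virtualized I/O:
# each VIF has a unique ID and two message queues, one for outgoing
# messages to the device and one for incoming messages from it.
from collections import deque
import itertools

class VirtualInterface:
    _ids = itertools.count(1)          # source of unique VIF IDs

    def __init__(self, kind):
        self.vif_id = next(self._ids)  # unique ID identifying this VIF
        self.kind = kind               # e.g. "network", "block"
        self.outgoing = deque()        # messages to the device
        self.incoming = deque()        # messages from the device

    def send(self, msg):
        self.outgoing.append(msg)

    def deliver(self, msg):
        self.incoming.append(msg)

nic = VirtualInterface("network")
disk = VirtualInterface("block")
nic.send("tx packet")
disk.deliver("read complete")
```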
As there are many challenges associated with commodity hardware devices, multiple I/O virtualization techniques need to be combined to eliminate the associated problems, such as system crashes during reassignment of I/O devices, incorrect functioning of I/O devices, and the high overhead of device emulation.
