Unit 2 – CCS335 Cloud Computing
Virtual Machine Basics – Taxonomy of Virtual Machines – Hypervisor – Key Concepts – Virtualization
structure – Implementation levels of virtualization – Virtualization Types: Full Virtualization – Para
Virtualization – Hardware Virtualization – Virtualization of CPU, Memory and I/O devices.
Virtual Machine:
A virtual machine is an emulation of a computer system. Virtual machines are based on computer architectures and provide the functionality of a physical computer. A VM may be implemented with specialized software, hardware, or a combination of both.
Virtual Machine Basics
To understand what a virtual machine is, we must first discuss what is meant by a machine, and, as pointed out earlier, the meaning of "machine" is a matter of perspective. From the perspective of
a process executing a user program, the machine consists of a logical memory address space that has been
assigned to the process, along with user-level registers and instructions that allow the execution of code belonging
to the process.
The I/O part of the machine is visible only through the operating system, and the only way the process
can interact with the I/O system is via operating system calls, often through libraries that execute as part of the
process. Processes are usually transient in nature (although not always). They are created, execute for a period of
time, perhaps spawn other processes along the way, and eventually terminate.
To summarize, the machine, from the perspective of a process, is a combination of the operating system
and the underlying user-level hardware. The ABI provides the interface between the process and the machine.
A system is a full execution environment that can simultaneously support a number of processes
potentially belonging to different users. All the processes share a file system and other I/O resources. The system
environment persists over time (with occasional reboots) as processes come and go. The system allocates physical
memory and I/O resources to the processes and allows the processes to interact with their resources via an OS
that is part of the system. Hence, the machine, from the perspective of a system, is implemented by the
underlying hardware alone, and the ISA provides the interface between the system and the machine.
Types of Virtual Machines: Virtual machines can be classified into two types:
1. System Virtual Machine: These virtual machines provide a complete system platform and support the execution of a complete virtual operating system. VirtualBox is an example: a system virtual machine provides an environment in which an OS can be installed completely. As the image below shows, the hardware of the real machine is distributed between two simulated operating systems by the virtual machine monitor, and programs and processes then run separately on the distributed hardware of each simulated machine.
2. Process Virtual Machine: Unlike a system virtual machine, a process virtual machine does not provide the facility to install a complete virtual operating system. Instead, it creates a virtual environment of that OS while an application or program is running, and this environment is destroyed as soon as the application exits. In the image below, some applications run directly on the main OS while virtual machines are created to run other applications; because those programs require a different OS, the process virtual machine provides one for as long as they are running. Example – the Wine software on Linux helps run Windows applications.
Virtual Machine Language: This is a type of language that can be understood by different operating systems; it is platform-independent. Just as running a program written in a language such as C, Python, or Java requires a specific compiler that converts the source code into an intermediate, machine-understandable form (also known as bytecode), a virtual machine language works the same way. If we want code that can be executed on different operating systems (Windows, Linux, etc.), a virtual machine language is helpful.
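As an illustrative sketch (not tied to any particular real VM language), a minimal stack-based bytecode interpreter shows why the same bytecode can run on any OS that has an interpreter for it:

```python
# A minimal stack-based bytecode interpreter (hypothetical instruction set).
# The same bytecode list runs unchanged wherever this interpreter runs,
# which is the essence of a platform-independent virtual machine language.

def run(bytecode):
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":          # push a constant onto the stack
            stack.append(arg)
        elif op == "ADD":         # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":         # pop two values, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4 expressed as portable bytecode
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]
print(run(program))  # 20
```

Only the interpreter is platform-specific; the bytecode itself never changes, which is exactly how the JVM runs the same class files on Windows and Linux.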
Hypervisor
A hypervisor is a form of virtualization software used in Cloud hosting to divide and allocate the resources on
various pieces of hardware. The program which provides partitioning, isolation, or abstraction is called a virtualization hypervisor. The hypervisor is a hardware virtualization technique that allows multiple guest operating systems (OS) to run on a single host system at the same time. A hypervisor is sometimes also called a virtual machine monitor (VMM).
Types of Hypervisor -
TYPE-1 Hypervisor:
The hypervisor runs directly on the underlying host system. It is also known as a "Native Hypervisor" or "Bare
metal hypervisor". It does not require any base server operating system. It has direct access to hardware resources.
Examples of Type 1 hypervisors include VMware ESXi, Citrix XenServer, and Microsoft Hyper-V hypervisor.
TYPE-2 Hypervisor:
A host operating system runs on the underlying host system. It is also known as a "Hosted Hypervisor". Such hypervisors do not run directly on the underlying hardware; rather, they run as an application in a host system (physical machine). Basically, the software is installed on an operating system, and the hypervisor asks the operating system to make hardware calls. Examples of Type 2 hypervisors include VMware Player and Parallels Desktop. Hosted hypervisors are often found on endpoints like PCs. The Type-2 hypervisor is very useful for engineers and security analysts (for checking malware, malicious source code, and newly developed applications).
Pros & Cons of Type-2 Hypervisor:
Pros: Such hypervisors allow quick and easy access to a guest operating system alongside the running host machine. These hypervisors usually come with additional useful features for guest machines, and such tools enhance coordination between the host machine and the guest machine.
Cons: Because there is no direct access to the physical hardware resources, these hypervisors lag behind Type-1 hypervisors in performance. There are also potential security risks: an attacker who compromises a weakness in the host operating system and gains access to it can also access the guest operating system.
2. The cost of a hypervisor: For many buyers, the toughest part of choosing a hypervisor is striking the right
balance between cost and functionality. While a number of entry-level solutions are free, or practically free, the
prices at the opposite end of the market can be staggering. Licensing frameworks also vary, so it’s important to
be aware of exactly what you’re getting for your money.
3. Virtual machine performance: Virtual systems should meet or exceed the performance of their physical counterparts, at least in relation to the applications within each server. Everything beyond meeting this benchmark is profit.
4. Ecosystem: It’s tempting to overlook the role of a hypervisor’s ecosystem – that is, the availability of
documentation, support, training, third-party developers and consultancies, and so on – in determining whether
or not a solution is cost-effective in the long term.
5. Test for yourself: You can gain basic experience from your existing desktop or laptop. You can run both
VMware vSphere and Microsoft Hyper-V in either VMware Workstation or VMware Fusion to create a nice
virtual learning and testing environment.
VIRTUALIZATION STRUCTURE:
2. Explain in detail about Hardware-based Virtualization. (or) Give the Virtualization Structure and Explain the various types of Virtualization. (May-2023)
Each instance of an operating system is called a Virtual Machine (VM), and the operating system that runs inside a virtual machine is called the guest operating system. Depending on the position of the virtualization layer, there are two classes of VM architectures, namely the bare-metal and host-based hypervisor architectures. The hypervisor is the software used for virtualization, also known as the VMM (Virtual Machine Monitor). The hypervisor software provides two different structures of virtualization, namely the Hosted structure (also called Type 2 virtualization) and the Bare-Metal structure (also called Type 1 virtualization).
To implement the Hosted structure, a base OS needs to be installed first, over which the VMM can be installed. The hosted structure is a simple solution for running multiple desktop OSes independently. Fig. 2.2.2 (a) and (b) shows Windows running on a Linux base OS and Linux running on a Windows base OS using a hosted hypervisor.
The popular hosted hypervisors are QEMU, VMware Workstation, Microsoft Virtual PC, Oracle VirtualBox, etc.
The hosted structure does not allow the guest OS to access the hardware directly; instead, access has to go through the base OS, which increases resource overhead. Virtual machine performance is slow and degraded because hardware access relies on the intermediate host OS.
In Bare-Metal Structure, the VMM can be directly installed on the top of Hardware, therefore no
intermediate host OS is needed. The VMM can directly communicate with the hardware and does not
rely on the host system for pass through permission which results in better performance,
scalability and stability. The Bare-Metal structure is shown in Fig. 2.2.3.
Bare-metal virtualization is mostly used in enterprise data centers for getting the advanced features like
resource pooling, high availability, disaster recovery and security.
The popular Bare-Metal hypervisors are Citrix XenServer, VMware ESXi and Microsoft Hyper-V.
Implementation Levels of Virtualization
Virtualization is implemented at various levels by creating a software abstraction layer between the host OS and the guest OS. The main function of the software layer is to virtualize the physical hardware of the host machine into virtual resources used by VMs through various operational layers. The different levels at which virtualization can be implemented are shown in Fig. 2.3.1. There are five implementation levels of virtualization: the Instruction Set Architecture (ISA) level, Hardware level, Operating System level, Library support level and Application level, which are explained as follows.
At the ISA level, an emulator works by translating instructions from the guest platform into instructions of the host platform. These instructions include both processor-oriented instructions (add, sub, jump, etc.) and the I/O-specific (IN/OUT) instructions for the devices. Although this virtual machine architecture works fine in terms of simplicity and robustness, it has its own pros and cons.
The advantages of ISA-level virtualization are that it provides ease of implementation while dealing with multiple platforms, and it can easily provide an infrastructure through which one can create virtual machines for other platforms, such as SPARC and Alpha, on x86 hardware. The disadvantage is that every instruction issued by the emulated computer needs to be interpreted in software first, which degrades performance.
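The interpretation overhead can be seen in a small sketch. The guest ISA below (registers r0..r3 and three instructions) is hypothetical; the point is that every guest instruction passes through a software decode-and-dispatch loop, which is exactly the per-instruction cost described above:

```python
# Sketch of ISA-level emulation: each guest instruction is decoded and
# dispatched in software. The decode/dispatch loop around every single
# instruction is the source of the performance penalty.

def emulate(program):
    regs = {"r0": 0, "r1": 0, "r2": 0, "r3": 0}
    pc = 0
    while pc < len(program):
        instr = program[pc]
        op = instr[0]
        if op == "MOV":                  # MOV rd, imm
            regs[instr[1]] = instr[2]
        elif op == "ADD":                # ADD rd, rs
            regs[instr[1]] += regs[instr[2]]
        elif op == "JNZ":                # JNZ rs, target
            if regs[instr[1]] != 0:
                pc = instr[2]
                continue
        pc += 1
    return regs

# Sum 3 + 2 + 1 with a loop: r0 accumulates, r1 counts down via r2 = -1.
prog = [
    ("MOV", "r0", 0), ("MOV", "r1", 3),
    ("ADD", "r0", "r1"),
    ("MOV", "r2", -1), ("ADD", "r1", "r2"),
    ("JNZ", "r1", 2),
]
print(emulate(prog)["r0"])  # 6
```

A real emulator such as Bochs or QEMU does the same thing at a far larger scale, which is why dynamic translation (caching translated blocks) is used to recover performance.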
a) Bochs
Bochs is a highly portable emulator that runs on most popular platforms, including x86, PowerPC, Alpha, Sun, and MIPS. It can be compiled to emulate most versions of x86 machines, including the 386, 486, Pentium, Pentium Pro and AMD64 CPUs, with optional MMX, SSE, SSE2, and 3DNow! instructions.
b) QEMU
QEMU (Quick Emulator) is a fast processor emulator that uses a portable dynamic translator. It supports two operating modes: user-space-only and full-system emulation. In the former mode, QEMU can launch Linux processes compiled for one CPU on another CPU, or be used for cross-compilation and cross-debugging. In the latter mode, it can emulate a full system that includes a processor and several peripheral devices. It supports emulation of a number of processor architectures, including x86, ARM, PowerPC, and SPARC.
c) Crusoe
The Crusoe processor comes with a dynamic x86 emulator, called the code morphing engine, that can execute any x86-based application on top of it. Crusoe is designed to handle the x86 ISA's precise exception semantics without constraining speculative scheduling. This is accomplished by shadowing all registers holding the x86 state.
d) BIRD
BIRD is an interpretation engine for x86 binaries that currently supports only x86 as the host ISA but aims to extend to other architectures as well. It exploits the similarity between architectures and tries to execute as many instructions as possible on the native hardware; all other instructions are supported through software emulation.
The popular hardware-level virtualization tools are
a) VMware
The VMware products are targeted towards x86-based workstations and servers. Thus, it has to deal with
the complications that arise as x86 is not a fully-virtualizable architecture. The VMware deals with this
problem by using a patent-pending technology that dynamically rewrites portions of the hosted machine code
to insert traps wherever VMM intervention is required. Although it solves the problem, it adds some overhead
due to the translation and execution costs. VMware tries to reduce the cost by caching the results and reusing
them wherever possible. Nevertheless, it again adds some caching cost that is hard to avoid.
b) Virtual PC
The Microsoft Virtual PC is based on the Virtual Machine Monitor (VMM) architecture, which lets the user create and configure one or more virtual machines. It provides most of the same functions as VMware, but an additional function is the undo disk operation, which lets the user easily undo previous operations on the hard disks of a VM. This enables easy data recovery and can come in handy in several circumstances.
c) Denali
The Denali project was developed at the University of Washington to address issues related to the scalability of VMs. The developers came up with a new virtualization architecture, also called para-virtualization, to support thousands of simultaneous machines, which they call lightweight virtual machines. It tries to increase the scalability and performance of virtual machines without too much implementation complexity.
Operating system level virtualization is an abstraction layer between the OS and user applications. It allows multiple operating systems and applications to run simultaneously without requiring a reboot or dual boot. The degree of isolation of each OS is very high, and it can be implemented at low risk with easy maintenance. The implementation of operating system level virtualization includes operating system installation, application suite installation, network setup, and so on. Therefore, if the required OS is the same as the one on the physical machine, the user basically ends up duplicating most of the effort he or she has already invested in setting up the physical machine. To run applications properly, the operating system keeps the application-specific data structures, user-level libraries, environment settings and other requisites separate.
The key idea behind all the OS-level virtualization techniques is that the virtualization layer above the OS produces, on demand, a partition per virtual machine that is a replica of the operating environment on the physical machine. With careful partitioning and multiplexing techniques, each VM can export a full operating environment that is fairly isolated from the other VMs and from the underlying physical machine.
The popular OS level virtualization tools are
a) Jail
Jail is a FreeBSD-based virtualization software that provides the ability to partition an operating system environment while maintaining the simplicity of the UNIX "root" model. The environments captured within a jail are typical system resources and data structures such as processes, file systems, network resources, etc. A process in a partition is referred to as an "in jail" process. When the system is booted up after a fresh install, no processes will be in jail. When a process is placed in a jail, it and all of its descendants created after the jail's creation remain within the jail. A process may not belong to more than one jail. Jails are created when a privileged process invokes a special system call, jail. Every call to jail creates a new jail; the only way for a new process to enter the jail is by inheriting access to the jail from another process already in that jail.
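The jail membership rules above can be sketched as a small model. This is a simplified illustration of the semantics, not the real jail(2) API:

```python
# Sketch of FreeBSD jail semantics: a process placed in a jail stays there,
# its descendants inherit the jail, and a process can belong to at most
# one jail. (Toy model for illustration only.)

class Process:
    def __init__(self, parent=None):
        # A child created after jail creation inherits its parent's jail.
        self.jail = parent.jail if parent else None

    def enter_jail(self, jail_id):
        if self.jail is not None:
            raise RuntimeError("a process may not belong to more than one jail")
        self.jail = jail_id

init = Process()              # fresh boot: no processes are in jail
init.enter_jail("www-jail")   # privileged process invokes jail
child = Process(parent=init)  # descendant created after jail creation
print(child.jail)             # www-jail
```

Note that once `init` is jailed, any attempt to place it in a second jail raises, mirroring the one-jail-per-process rule.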
b) Ensim
Ensim virtualizes a server's native operating system so that it can be partitioned into isolated computing environments called virtual private servers. These virtual private servers operate independently of each other, just like dedicated servers. It is commonly used in hosting environments to allocate hardware resources among a large number of distributed users.
Most systems use an extensive set of Application Programmer Interfaces (APIs) instead of legacy system calls to implement various libraries at the user level. Such APIs are designed to hide operating-system-related details to keep things simpler for ordinary programmers. In library-level virtualization, the virtual environment is created above the OS layer and is mostly used to implement a different Application Binary Interface (ABI) and Application Programming Interface (API) using the underlying system.
The example of library-level virtualization is WINE. Wine is an implementation of the Windows API and can be used as a library to port Windows applications to UNIX. It is a virtualization layer on top of X and UNIX that exports the Windows API/ABI, which allows Windows binaries to run on top of it.
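The idea can be sketched as a user-level shim: a foreign API is implemented in terms of the host's native facilities, with no OS or hardware support needed. The function name below is a simplified stand-in, not the real Win32 signature:

```python
# Sketch of library-level virtualization in the spirit of Wine: a user-level
# shim implements a "Windows-style" call on top of the host's native open().
# The API here is a toy stand-in, not the actual Win32 CreateFile interface.

import os
import tempfile

def CreateFile(path, mode="w"):
    """Toy 'Windows-style' file call, translated to a native host call."""
    return open(path, mode)

path = os.path.join(tempfile.gettempdir(), "wine_demo.txt")
handle = CreateFile(path)          # application thinks it called a Windows API
handle.write("hello from the shim")
handle.close()
print(open(path).read())           # hello from the shim
```

Wine does this at full scale: thousands of Windows API entry points are reimplemented as a library, so unmodified Windows binaries link against the shim instead of a real Windows kernel.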
In this abstraction technique, operating systems and user-level programs execute like applications for the machine. Therefore, specialized instructions are needed for hardware manipulation, such as I/O-mapped instructions (manipulating the I/O) and memory-mapped instructions (mapping a chunk of memory to the I/O and then manipulating the memory). The group of such special instructions constitutes what is called application-level virtualization. The Java Virtual Machine (JVM) is the popular example of application-level virtualization; it creates a virtual machine at the application level rather than the OS level. It supports a new self-defined set of instructions, called Java bytecodes, for the JVM. Such VMs pose little security threat to the system while letting the user play with them like physical machines. Like a physical machine, such a VM has to provide an operating environment to its applications, either by hosting a commercial operating system or by coming up with its own environment.
The comparison between different levels of virtualization is shown in Table 2.4.1.
Every hypervisor uses some mechanisms to control and manage virtualization strategies that allow
different operating systems such as Linux and Windows to be run on the same physical machine,
simultaneously. Depending on the position of the virtualization layer, there are several classes of VM
mechanisms, namely the binary translation, para-virtualization, full virtualization, hardware assist
virtualization and host-based virtualization. The mechanisms of virtualization defined by VMware and other
virtualization providers are explained as follows.
a) Binary translation
In binary translation of the guest OS, the VMM runs at Ring 0 and the guest OS at Ring 1. The VMM checks the instruction stream and identifies the privileged, control-sensitive and behavior-sensitive instructions. When these instructions are identified, they are trapped into the VMM, which emulates their behavior. The method used in this emulation is called binary translation. The binary translation mechanism is shown in Fig. 2.5.3.
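The scan-and-trap step can be sketched as follows. The instruction names are illustrative, not a real x86 encoding; the point is the split between instructions that pass through for direct execution and sensitive ones that get rewritten into traps the VMM emulates:

```python
# Sketch of the binary-translation idea: scan the guest instruction stream,
# let innocuous instructions run on hardware directly, and rewrite
# privileged/sensitive ones into traps into the VMM.
# Instruction names are illustrative only.

PRIVILEGED = {"CLI", "STI", "HLT", "OUT"}   # control/behavior-sensitive ops

def translate(guest_stream):
    translated = []
    for instr in guest_stream:
        if instr in PRIVILEGED:
            translated.append(("TRAP_TO_VMM", instr))  # VMM emulates this
        else:
            translated.append(("DIRECT", instr))       # runs natively
    return translated

stream = ["MOV", "ADD", "CLI", "MOV", "OUT"]
for kind, instr in translate(stream):
    print(kind, instr)
```

A real VMM such as VMware performs this rewriting on binary code at run time and caches the translated blocks, which is where the memory overhead mentioned later comes from.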
b) Full Virtualization
In full virtualization, the guest OS does not require any modification to its code. Instead, the technique relies on binary translation to trap and virtualize the execution of certain sensitive, non-virtualizable instructions. Most guest operating systems and their applications are composed of critical and noncritical instructions, and the critical instructions are executed with the help of the binary translation mechanism.
With full virtualization, noncritical instructions run on the hardware directly, while critical instructions are discovered and replaced with traps into the VMM to be emulated by software. In host-based virtualization, both the host OS and the guest OS take part in virtualization, with the virtualization software layer lying between them.
Therefore, full virtualization works with binary translation to perform direct execution of instructions, where the guest OS is completely decoupled from the underlying hardware and is consequently unaware that it is being virtualized. Full virtualization can give degraded performance because it involves binary translation of instructions before execution, which is time-consuming. In particular, full virtualization of I/O-intensive applications is a big challenge: binary translation employs a code cache to store translated instructions to improve performance, but this increases the cost of memory usage.
c) Host-based virtualization
In host-based virtualization, the virtualization layer runs on top of the host OS, and the guest OS runs over the virtualization layer. Therefore, the host OS is responsible for managing the hardware and controlling the instructions executed by the guest OS.
Host-based virtualization does not require modifying the host OS code, but the virtualization software has to rely on the host OS to provide device drivers and other low-level services. This architecture simplifies VM design and eases deployment, but it gives degraded performance compared to other hypervisor architectures because of host OS intervention. The host OS performs four layers of mapping during any I/O request by the guest OS or VMM, which downgrades performance significantly.
Para-Virtualization
Para-virtualization is one of the efficient virtualization techniques; it requires explicit modification of the guest operating systems. The para-virtualized VM provides the APIs that these OS modifications, and the user applications built on them, require.
In some virtualized systems, performance degradation becomes a critical issue. Therefore, para-virtualization attempts to reduce the virtualization overhead, and thus improve performance, by modifying only the guest OS kernel. The para-virtualization architecture is shown in Fig. 2.5.4.
Although para-virtualization reduces CPU overhead, it has disadvantages: it still has many compatibility and portability issues for the virtual system, it incurs high implementation and maintenance costs, and virtualization performance varies with workload. Popular examples of para-virtualization are Xen, KVM, and VMware ESXi.
a) Para-Virtualization with Compiler Support
Para-virtualization supports the handling of privileged instructions ahead of run time. Whereas a full virtualization architecture executes sensitive privileged instructions by intercepting and emulating them at runtime, para-virtualization handles such instructions at compile time. In para-virtualization with compiler support, the guest OS kernel is modified to replace the privileged and sensitive instructions with hypercalls to the hypervisor or VMM at compile time itself. The Xen hypervisor assumes such a para-virtualization architecture.
Here, a guest OS running in a guest domain may run at Ring 1 instead of Ring 0, which is why the guest OS may not be able to execute some privileged and sensitive instructions. Therefore, such privileged instructions are implemented by hypercalls to the hypervisor. After replacing the instructions with hypercalls, the modified guest OS emulates the behavior of the original guest OS.
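The compile-time rewrite can be sketched as a one-pass transformation over the guest kernel. The instruction and hypercall names below are illustrative (loosely in the style of Xen), not a real hypercall table:

```python
# Sketch of para-virtualization with compiler support: privileged
# instructions in the guest kernel are statically replaced with hypercalls
# before execution, so no runtime scanning or trapping is needed.
# Instruction/hypercall names are illustrative only.

HYPERCALL_FOR = {
    "WRITE_CR3": "hypercall_mmu_update",   # page-table update goes via VMM
    "CLI":       "hypercall_disable_irq",  # interrupt control goes via VMM
}

def paravirtualize(kernel_code):
    """Rewrite the guest kernel once, at 'compile time'."""
    return [("HYPERCALL", HYPERCALL_FOR[i]) if i in HYPERCALL_FOR
            else ("NATIVE", i)
            for i in kernel_code]

kernel = ["MOV", "WRITE_CR3", "ADD", "CLI"]
print(paravirtualize(kernel))
```

Contrast this with the binary-translation sketch earlier: there the rewriting happens dynamically at run time on unmodified guest code, whereas here the guest kernel source is modified once, which is why para-virtualization avoids runtime translation overhead at the cost of requiring guest OS changes.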
Virtualization of CPU, Memory, And I/O Devices
5. Explain in detail about Virtualization of CPU, Memory, and I/O Devices. (Nov/Dec 2021)
Virtualization of CPU
CPU virtualization is related to a range of protection levels, called rings, in which code can execute. The Intel x86 CPU architecture offers four levels of privilege, known as Ring 0, 1, 2 and 3.
In binary translation, the virtual machine issues the privileged instructions contained within its compiled code. The VMM takes control of these instructions and changes the code under execution to avoid impacting the state of the system. The full virtualization technique does not need to modify the host operating system; it relies on binary translation to trap and virtualize the execution of certain instructions. The noncritical instructions run directly on the hardware, while the critical instructions have to be discovered first and then replaced with traps into the VMM to be emulated by software.
Virtualization of Memory
6. Explain in detail about virtualization of memory with an example.
Memory virtualization involves sharing physical memory and dynamically allocating it to virtual machines. In a traditional execution environment, the operating system maintains the mappings of virtual memory to machine memory using page tables; the page table is a single-stage mapping from virtual memory to machine memory. All recent x86 CPUs include a built-in Memory Management Unit (MMU) and a Translation Lookaside Buffer (TLB) to improve virtual memory performance. However, in a virtual execution environment, mappings are required from virtual memory to physical memory and from physical memory to machine memory; hence a two-stage mapping process is required.
A modern OS provides virtual memory support that is similar to memory virtualization: applications see virtualized memory as a contiguous address space that is not tied to the underlying physical memory in the system. The operating system maps the virtual page numbers to physical page numbers stored in page tables. Therefore, to run multiple virtual machines with guest OSes on a single system, the MMU has to be virtualized, as shown in Fig. 2.7.1.
The guest OS controls the mapping of virtual addresses to guest physical memory addresses, but the guest OS cannot have direct access to the actual machine memory. The VMM is responsible for mapping guest physical memory to the actual machine memory, and it uses shadow page tables to accelerate the mappings. Using the shadow page tables together with the TLB hardware, the VMM maps virtual memory directly to machine memory, avoiding two levels of translation on every access. When the guest OS changes a virtual-to-physical memory mapping, the VMM updates the shadow page tables to enable a direct lookup. Hardware-assisted memory virtualization by AMD processors provides hardware assistance for this two-stage address translation in a virtual execution environment, using a technology called nested paging.
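The two-stage mapping and the shadow page table that collapses it can be sketched with small dictionaries standing in for page tables (all page numbers below are made up for illustration):

```python
# Sketch of two-stage address translation with a shadow page table.
# guest_pt : guest virtual page  -> guest "physical" page (kept by guest OS)
# p2m      : guest physical page -> machine page          (kept by the VMM)
# The shadow table composes both stages so hardware can do one-step lookups.

guest_pt = {0: 7, 1: 3}     # guest OS page table
p2m      = {7: 42, 3: 99}   # VMM's physical-to-machine map

def build_shadow(guest_pt, p2m):
    # The VMM recomputes this whenever the guest edits its page table.
    return {gv: p2m[gp] for gv, gp in guest_pt.items()}

shadow = build_shadow(guest_pt, p2m)

def translate(gv_page):
    return shadow[gv_page]   # single lookup, as the TLB would perform

print(translate(0))  # 42  (virtual page 0 -> guest physical 7 -> machine 42)
print(translate(1))  # 99
```

Nested paging moves the `build_shadow` composition into hardware: the MMU walks both tables itself, so the VMM no longer has to keep shadow copies in sync.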
Virtualization of I/O Devices
Virtualization of devices and I/O is a bit more difficult than CPU virtualization. It involves managing the routing of I/O requests between virtual devices and the shared physical hardware. Software-based I/O virtualization and management techniques can be used for device and I/O virtualization to enable a rich set of features and simplified management. The network is an integral component of the system that enables communication between different VMs. I/O virtualization provides virtual NICs and switches that create virtual networks between the virtual machines without placing traffic on, or consuming bandwidth of, the physical network. NIC teaming allows multiple physical NICs to appear as one and provides failover transparency for virtual machines. It allows virtual machines to be seamlessly relocated to different systems using VMware VMotion while keeping their existing MAC addresses. The key to effective I/O virtualization is to preserve the virtualization benefits with minimum CPU utilization. Fig. 2.7.2 shows device and I/O virtualization.
The virtual devices shown in Fig. 2.7.2 can effectively emulate well-known hardware and translate virtual machine requests to the system hardware. Standardized device drivers help with virtual machine standardization. Portability in I/O virtualization allows all virtual machines across platforms to be configured and run on the same virtual hardware, regardless of the actual physical hardware in the system. There are four methods of implementing I/O virtualization: full device emulation, para-virtualization, direct I/O virtualization, and self-virtualized I/O. The full device emulation approach emulates well-known real-world devices, where all the functions of a device, such as enumeration, identification, interrupts and DMA, are replicated in software. The para-virtualization method of I/O virtualization uses a split driver model consisting of frontend and backend drivers: the frontend driver runs in Domain U and manages the I/O requests of the guest OS, while the backend driver runs in Domain 0 and manages the real I/O devices, multiplexing the I/O data of different VMs; the two interact via a block of shared memory. Direct I/O virtualization lets the VM access devices directly and mainly focuses on networking of mainframes.
In full device emulation, the I/O devices are virtualized using emulation software. This method can emulate all well-known, real-world devices. The emulation software performs all the functions of a device or bus infrastructure, such as device enumeration, identification, interrupts, and DMA, which are replicated in software. The software runs inside the VMM and acts as a virtual device. In this method, the I/O access requests of the guest OS are trapped in the VMM, which interacts with the I/O devices. Multiple VMs can thereby share a single hardware device while running concurrently. However, software emulation adds time to every I/O access, which is why it runs much slower than the hardware it emulates.
In the para-virtualization method of I/O virtualization, a split driver model is used, consisting of a frontend driver and a backend driver. It is used in the Xen hypervisor, with the frontend driver running in Domain U and the backend driver running in Domain 0. Both drivers interact with each other via a block of shared memory. The frontend driver manages the I/O requests of the guest OSes, while the backend driver manages the real I/O devices and multiplexes the I/O data of different VMs.
The para-virtualization method of I/O virtualization achieves better device performance than full device
emulation but with a higher CPU overhead.
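The split driver data path can be sketched with a queue standing in for the shared-memory ring (the function names and the toy disk are illustrative, not the real Xen ring API):

```python
# Sketch of the Xen-style split driver model: a frontend driver in Domain U
# places I/O requests on a shared ring, and a backend driver in Domain 0
# services them against the real device and posts responses back.
from collections import deque

shared_ring = deque()   # stands in for the block of shared memory
responses = deque()

def frontend_write(sector, data):
    """Guest (Domain U) side: enqueue a request instead of touching hardware."""
    shared_ring.append(("write", sector, data))

def backend_service(disk):
    """Host (Domain 0) side: drain the ring, do the real I/O, respond."""
    while shared_ring:
        op, sector, data = shared_ring.popleft()
        if op == "write":
            disk[sector] = data          # only Domain 0 touches device state
            responses.append(("ok", sector))

disk = {}
frontend_write(5, b"hello")
backend_service(disk)
print(disk[5], responses[0])   # b'hello' ('ok', 5)
```

Because only the backend touches the device, one physical disk can be multiplexed among many guests, which is the multiplexing role of Domain 0 described above; the extra hop through the shared ring is also where the additional CPU overhead comes from.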
In direct I/O virtualization, virtual machines can access I/O devices directly, without relying on any emulator in the VMM. It can give better I/O performance than the para-virtualization method without high CPU costs. It was designed with a focus on networking for mainframes.
In the self-virtualized I/O method, the rich resources of a multicore processor are harnessed together. Self-virtualized I/O encapsulates all the tasks involved in virtualizing an I/O device. It provides virtual devices with an associated access API to the VMs and a management API to the VMM, defining one Virtual Interface (VIF) for every kind of virtualized I/O device.
The virtualized I/O interfaces include virtual network interfaces, virtual block devices (disks), virtual camera devices, and others. The guest OS interacts with the virtual interfaces via device drivers. Each VIF carries a unique ID identifying it within the self-virtualized I/O and consists of two message queues: one for outgoing messages to the device and one for incoming messages from the device.
Because there are many challenges associated with commodity hardware devices, multiple I/O virtualization techniques need to be combined to eliminate the associated problems, such as system crashes during reassignment of I/O devices, incorrect functioning of I/O devices, and the high overhead of device emulation.