Virtualization for IT Students
BY
SUPERVISOR:
MR.G.G.O. EGBEDOKUN
NOVEMBER, 2022
CERTIFICATION
This is to certify that this seminar report was written by Adeyemi Adejoke Rofiat with
Matriculation number 2017235020002, Department of Computer Studies, Faculty of
Science, The Polytechnic Ibadan, under my supervision.
_______________________ _____________________
MR EGBEDOKUN G.G.O. DATE
SUPERVISOR
_________________________ _______________________
MR BABATUNDE FADIORA DATE
HEAD OF DEPARTMENT
DEDICATION
This report is dedicated to the Most High God for His mercy, wisdom, guidance, protection
and provision at all times.
ACKNOWLEDGEMENT
Foremost, I express my sincere gratitude to God Almighty for His supremacy and grace which
saw me through; nothing can be achieved without Him.
I am greatly short of words to appreciate the pillar behind the garden of my success, my
wonderful parents, MR AND MRS ADEYEMI. God be with you; and to my siblings, thank
you so much. I appreciate your love and support at all times.
I extend my gratitude to every blessed staff member of the Department of Computer
Studies. May the Lord reward you all abundantly, Amen.
To all my colleagues, you are the best. May the good Lord continue to enlarge your coast. See you
at the top.
ABSTRACT
Virtualization has become one of the hottest information technologies of the past few
years. It combines or divides computing resources to present one or more operating
systems, using methodologies such as hardware and software partitioning or aggregation,
partial or complete machine simulation, emulation, time sharing, and others. Yet, despite
the proclaimed cost savings and efficiency improvements, implementation of
virtualization involves a high degree of uncertainty and, consequently, a great possibility of
failure. Experience from managing VMware-based project activities at
several companies is reported as examples to illustrate how to increase the chance of
successfully implementing a virtualization project. We first analyze the results and
provide a survey of completed performance studies comparing benchmark results of
various tasks such as video editing, video conferencing, application start-up, etc. We then
discuss the performance impact the host operating system has on virtual environments
and, based on these results, provide a recommendation for the best host operating system
to use when running virtual machines.
Virtualization technologies find important application over a wide range of areas such as
server consolidation, secure computing platforms supporting multiple operating systems,
kernel debugging and development, system migration, etc., resulting in widespread usage.
Most of them present similar operating environments to the end user; however, they tend
to vary widely in the levels of abstraction at which they operate and in the underlying
architecture. Virtualization was invented more than 30 years ago to allow large, expensive
mainframes to be easily shared among different application environments. As hardware
prices went down, the need for virtualization faded away. More recently, virtualization at
all levels (system, storage and network) became important again as a way to improve
system security, reliability and availability, reduce costs and provide greater flexibility.
Table of Contents
DEDICATION
ACKNOWLEDGEMENT
ABSTRACT
1.0 INTRODUCTION
1.1 Virtualization Technologies
2.0 LITERATURE REVIEW
3.0 MAIN DISCUSSION
CONCLUSION
REFERENCES
1.0 INTRODUCTION
The virtual machine concept has been in existence since the 1960s, when it was first developed by
IBM to provide concurrent access to a mainframe computer. Each virtual machine (VM)
was an instance of the physical machine that gave users the illusion of accessing the
physical machine directly. It was an elegant and transparent way to enable time-sharing
and resource sharing on highly expensive hardware. Each VM was a fully protected
and isolated copy of the underlying system. Users could execute, develop, and test
applications without ever having to fear crashing the systems used by other users on
the same computer. Virtualization was thus used to reduce the hardware acquisition cost
and to improve productivity by letting more users work on the machine
simultaneously.
As hardware got cheaper and multiprocessing operating systems emerged, VMs were
almost extinct in the 1970s and 1980s. With the emergence of a wide variety of PC-based
hardware and operating systems in the 1990s, the virtualization ideas were in demand again.
The main use for VMs then was to enable execution of a range of applications,
originally targeted for different hardware and operating systems, on a given machine. The trend
continues even now. “Virtuality” differs from “reality” only in the formal world, while
possessing a similar essence or effect. In the computer world, a virtual environment is
perceived the same as a real environment by application programs and the rest of
the world, though the underlying mechanisms are formally different. More often than not,
the virtual environment presents a machine (or resource) that has more (or less) capability
compared to the physical machine (or resource) underneath, for various reasons.
1.1 Virtualization Technologies
Figure 1: A physical machine, with an application running directly on the operating system, which in turn runs directly on the hardware.
The application is installed and runs directly on the operating system, which in turn
runs directly on the computer’s hardware. The application’s user interface is presented
via a display that’s directly attached to this machine. This simple scenario is familiar to
anybody who’s ever used Windows.
But it’s not the only choice. In fact, it’s often not the best choice. Rather than locking
these various parts together—the operating system to the hardware, the application to the
operating system, and the user interface to the local machine—it’s possible to loosen the
direct reliance these parts have on each other.
Doing this means virtualizing aspects of this environment, something that can be
done in various ways. The operating system can be decoupled from the physical hardware
it runs on using hardware virtualization, for example, while application virtualization
allows an analogous decoupling between the operating system and the applications that
use it. Similarly, presentation virtualization allows separating an application’s user
interface from the physical machine the application runs on. All of these approaches to
virtualization help make the links between components less rigid. This lets hardware and
software be used in more diverse ways, and it also makes both easier to change. Given
that most IT professionals spend most of their time working with what’s already installed
rather than rolling out new deployments, making their world more malleable is a good
thing.
Each type of virtualization also brings other benefits specific to the problem it
addresses. Understanding what these are requires knowing more about the technologies
themselves. Accordingly, the next sections take a closer look at each one.
1) Hardware Virtualization
For most IT people today, the word “virtualization” conjures up thoughts of running
multiple operating systems on a single physical machine. This is hardware virtualization,
and while it’s not the only important kind of virtualization, it is unquestionably the most
visible today.
The core idea of hardware virtualization is simple: Use software to create a virtual
machine (VM) that emulates a physical computer. By providing multiple VMs at once,
this approach allows running several operating systems simultaneously on a single
physical machine. Figure 2 shows how this looks.
Figure 2: Hardware virtualization, with multiple operating systems running in virtual machines on a single physical machine.
When used on client machines, this approach is often called desktop virtualization,
while using it on server systems is known as server virtualization. Desktop virtualization
can be useful in a variety of situations. One of the most common is to deal with
incompatibility between applications and desktop operating systems. For example,
suppose a user running Windows Vista needs to use an application that runs only on
Windows XP with Service Pack 2. By creating a VM that runs this older operating
system, then installing the application in that VM, this problem can be solved.
Yet while desktop virtualization is useful, the real excitement around hardware
virtualization is focused on servers. The primary reason for this is economic: Rather than
paying for many under-utilized server machines, each dedicated to a specific workload,
server virtualization allows consolidating those workloads onto a smaller number of more
fully used machines. This implies fewer people to manage those computers, less space to
house them, and fewer kilowatt hours of power to run them, all of which saves money.
Server virtualization also makes restoring failed systems easier. VMs are stored as
files, and so restoring a failed system can be as simple as copying its file onto a new
machine. Since VMs can have different hardware configurations from the physical
machine on which they’re running, this approach also allows restoring a failed system
onto any available machine. There’s no requirement to use a physically identical system.
Hardware virtualization can be accomplished in various ways, and so Microsoft offers
several different technologies that address this area. They include the following:
Hyper-V: Part of Windows Server 2008, Hyper-V provides hardware virtualization for
servers.
Virtual Desktop Infrastructure (VDI): Based on Hyper-V and Windows Vista, VDI
defines a way to create virtual desktops.
Virtual PC 2007: A free download for Windows Vista and Windows XP, Virtual PC
provides hardware virtualization for desktop systems.
All of these technologies are useful in different situations, and all are described in more
detail later in this overview.
2) Presentation Virtualization
Much of the software people use most is designed to both run and present its user
interface on the same machine. The applications in Microsoft Office are one common
example, but there are plenty of others. While accepting this default is fine much of the
time, it’s not without some downside. For example, organizations that manage many
desktop machines must make sure that any sensitive data on those desktops is kept
secure. They’re also obliged to spend significant amounts of time and money managing
the applications resident on those machines. Letting an application execute on a remote
server, yet display its user interface locally—presentation virtualization—can help.
Figure 3 shows how this looks.
Figure 3: Presentation virtualization, with applications running in virtual sessions on a server and projecting their user interfaces to remote clients.
As the figure shows, this approach allows creating virtual sessions, each interacting with
a remote desktop system. The applications executing in those sessions rely on
presentation virtualization to project their user interfaces remotely. Each session might
run only a single application, or it might present its user with a complete desktop offering
multiple applications. In either case, several virtual sessions can use the same installed
copy of an application.
Running applications on a shared server like this offers several benefits, including the
following:
Data can be centralized, storing it safely on a central server rather than on multiple
desktop machines. This improves security, since information isn’t spread across many
different systems.
3) Application Virtualization
Virtualization provides an abstracted view of some computing resource. Rather than
run directly on a physical computer, for example, hardware virtualization lets an
operating system run on a software abstraction of a machine. Similarly, presentation
virtualization lets an application’s user interface be abstracted to a remote device. In both
cases, virtualization loosens an otherwise tight bond between components.
Another bond that can benefit from more abstraction is the connection between an
application and the operating system it runs on. Every application depends on its
operating system for a range of services, including memory allocation, device drivers,
and much more. Incompatibilities between an application and its operating system can be
addressed by either hardware virtualization or presentation virtualization, as described
earlier. But what about incompatibilities between two applications installed on the same
instance of an operating system? Applications commonly share various things with other
applications on their system, yet this sharing can be problematic. For example, one
application might require a specific version of a dynamic link library (DLL) to function,
while another application on that system might require a different version of the same
DLL. Installing both applications leads to what’s commonly known as DLL hell, where
one of them overwrites the version required by the other. To avoid this, organizations
often perform extensive testing before installing a new application, an approach that’s
workable but time-consuming and expensive.
Application virtualization solves this problem by creating application-specific copies
of all shared resources, as Figure 4 illustrates. The problematic things an application
might share with other applications on its system—registry entries, specific DLLs, and
more—are instead packaged with it, creating a virtual application. When a virtual
application is deployed, it uses its own copy of these shared resources.
Figure 4: A virtual application packaged with its own copies of shared resources, running on an application virtualization layer above the operating system and hardware.
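The packaging idea can be sketched in a few lines of code. The sketch below is illustrative only; the resource names and the VirtualApplication class are invented for the example and are not part of any Microsoft product. Each virtual application carries private copies of resources it would otherwise share system-wide, and its resolver consults the package before falling back to the real system, so two applications that need different versions of the same DLL no longer collide.

    # Conceptual sketch of application virtualization (not a real App-V API).
    # Each "virtual application" is packaged with private copies of resources it
    # would otherwise share system-wide, so conflicting DLL versions can coexist.

    SYSTEM_RESOURCES = {                     # hypothetical machine-wide state
        "msvcrt.dll": "7.0",
        "HKLM/Software/Example/Setting": "old-value",
    }

    class VirtualApplication:
        def __init__(self, name, packaged_resources):
            self.name = name
            # Private copies travel with the package instead of being installed globally.
            self.packaged_resources = dict(packaged_resources)

        def resolve(self, resource):
            """Look in the package first; fall back to the real system only if absent."""
            if resource in self.packaged_resources:
                return self.packaged_resources[resource]
            return SYSTEM_RESOURCES[resource]

    app_a = VirtualApplication("AppA", {"msvcrt.dll": "7.0"})
    app_b = VirtualApplication("AppB", {"msvcrt.dll": "9.0"})   # needs a newer DLL

    # Both applications run on the same OS instance, yet each sees the DLL it requires.
    print(app_a.resolve("msvcrt.dll"))                        # 7.0
    print(app_b.resolve("msvcrt.dll"))                        # 9.0
    print(app_a.resolve("HKLM/Software/Example/Setting"))     # falls through to the system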
A virtualization layer, thus, provides infrastructure support using the lower-level
resources to create multiple virtual machines that are independent of and isolated from
each other. Sometimes, such a virtualization layer is also called a Virtual Machine Monitor
(VMM). Although traditionally VMM is used to mean a virtualization layer right on top
of the hardware and below the operating system, we might use it to represent a generic
layer in many cases. There are innumerable ways in which virtualization can be useful in
practical scenarios.
Accepting reality, we must admit that machines were never designed with the aim of
supporting virtualization. Every computer exposes only one “bare” machine interface and
hence would support only one instance of an operating system kernel. For example, only
one software component can be in control of the processor at a time and be able to execute a
privileged instruction. Anything that needs to execute a privileged instruction, e.g. an I/O
instruction, would need the help of the currently booted kernel. In such a scenario, the
unprivileged software traps into the kernel when it tries to execute an instruction
that requires privilege, and the kernel executes the instruction on its behalf. This technique
is often used to virtualize a processor.
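A minimal sketch of this trap-and-emulate idea follows. It is illustrative only: the opcode names and the virtual device model are invented for the example, and a real VMM works at the hardware level rather than in a high-level language.

    # Sketch of trap-and-emulate. Unprivileged guest instructions run directly;
    # privileged ones "trap" to the VMM, which emulates their effect on the
    # guest's private virtual state instead of touching the real hardware.

    PRIVILEGED = {"OUT", "SET_TIMER", "HLT"}     # hypothetical privileged opcodes

    class GuestVM:
        def __init__(self, name):
            self.name = name
            self.registers = {"ACC": 0}
            self.virtual_io_log = []             # stands in for the guest's virtual devices
            self.halted = False

    def emulate_privileged(vm, opcode, operand):
        """The VMM's trap handler: perform the privileged operation on virtual state."""
        if opcode == "OUT":
            vm.virtual_io_log.append(operand)    # I/O goes to the VM's virtual device
        elif opcode == "SET_TIMER":
            vm.registers["TIMER"] = operand
        elif opcode == "HLT":
            vm.halted = True

    def run(vm, program):
        for opcode, operand in program:
            if vm.halted:
                break
            if opcode in PRIVILEGED:
                emulate_privileged(vm, opcode, operand)   # would be a hardware trap in reality
            elif opcode == "ADD":                         # unprivileged: executes directly
                vm.registers["ACC"] += operand

    vm = GuestVM("guest-1")
    run(vm, [("ADD", 5), ("OUT", "hello"), ("ADD", 2), ("HLT", None)])
    print(vm.registers["ACC"], vm.virtual_io_log)         # 7 ['hello']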
2.0 LITERATURE REVIEW
Virtualization typically involves using special software to safely run multiple operating
systems and applications simultaneously on a single computer (Business Week Online,
2007; Scheier, 2007; Kovar, 2008). The technology initially allowed companies to
consolidate an array of servers to improve operating efficiency and reduce costs
(Strassmann, 2007; Lechner, 2007; Godbout, 2007). It has since been applied to data
storage as well as desktop systems (Taylor, 2007). Owing to the success of
tools developed by VMware Inc., the technology has become one of the most talked-about
technologies, and has drawn attention from both IS professionals and non-IS executives in
virtually all industries. Despite its potentially significant impact on companies'
operations, this technology has been virtually ignored by academic researchers.
Almost all articles addressing this technology and its potential effects on a company's
IT/IS management have been written by practitioners. The technology, however, is not a
panacea, and just like most other information technologies, it comes with a great deal of
risk (Dubie, 2007). Companies considering virtualization projects will have to follow
solid guidelines to help minimize the risks associated with such projects. This study attempts
to develop strategies for successfully implementing a virtualization project. Lessons
learned from the implementation of a VMware-based virtualization project will be used to
formulate strategies which may be used by companies to reduce the uncertainty
associated with managing their virtualization projects. A literature review of
virtualization technology is presented in the following section. Lessons from
implementing virtualization projects at several companies are reported throughout the paper
to compare VMware and other virtualization solutions. Strategies for
successfully implementing such a project will then be presented. A brief summary of
major lessons learned and some directions for future studies will conclude this paper.
There are many ways to provide a virtualized environment. A virtual platform maps
virtual requests from a virtual machine to physical requests. Virtualization can take place
at several different levels of abstraction, including the ISA (Instruction Set
Architecture), HAL (Hardware Abstraction Layer), operating system level and user level.
ISA-level virtualization emulates the entire instruction set architecture of a virtual
machine in software. HAL-level virtualization exploits the similarity between the
architectures of the virtual and host machines, and directly executes certain instructions on
the native CPU without emulation. Virtualization has become an increasingly hot topic in
information technology lately. This is due to the lower operational costs for business that
come from consolidating resources into a single virtualized system. Provided in this report is a
scientific analysis of how the performance of different hardware components varies due to
virtualization.
There are two solutions for optimization: blade servers or virtualization. Blade
servers are a hardware solution while virtualization is a software solution. The blade
option allows each server to have its own processor and memory but share
power supplies, cabling and storage. Software virtualization simply pools the
server resources and allocates those resources as needed in a more efficient manner.
There are companies who may choose the hardware route as well as other companies
who may choose the software route. In some cases the two can be used together to
complement each other (Goodchild, 2007).
An emulator handles the instructions issued by the guest (i.e., the virtual machine that is
being emulated) by translating them to a set of native instructions and then executing them
on the available hardware. Those instructions would include those typical of a processor
(add, sub, jmp, etc. on x86), and the I/O-specific instructions for the devices (IN/OUT,
for example). For an emulator to successfully emulate a real computer, it has to be able to
emulate everything that a real computer does, which includes reading ROM chips,
rebooting, switching it on, etc.
Although this virtual machine architecture works fine in terms of simplicity and
robustness, it has its own pros and cons. On the positive side, the architecture provides
ease of implementation while dealing with multiple platforms. As the emulator works by
translating instructions from the guest platform to instructions of the host platform, it
accommodates easily when the guest platform's architecture changes, as long as there
exists a way of accomplishing the same task through instructions available on the host
platform. In this way, it enforces no stringent binding between the guest and the host
platforms. It can easily provide infrastructure through which one can create virtual
machines based on, say, x86 on platforms such as x86, SPARC, Alpha, etc.
However, the architectural portability comes at the price of performance. Since every
instruction issued by the emulated machine needs to be interpreted in software, the
performance penalty involved is significant. We consider a few such examples to
illustrate this in detail.
2.2.1 BOCHS
Bochs is an open-source x86 PC emulator written in C++ by a group of people led by
Kevin Lawton. It is a highly portable emulator that can be run on most popular platforms,
including x86, PowerPC, Alpha, Sun, and MIPS. It can be compiled to emulate most
variations of the x86 machine, including 386, 486, Pentium, Pentium Pro or AMD64
CPUs, with optional MMX, SSE, SSE2, and 3DNow! instructions. Bochs interprets
every instruction from power-up to reboot, emulates the Intel x86 CPU and a custom BIOS,
and has device models for all the standard PC peripherals: keyboard, mouse, VGA
card/monitor, disk, timer chips, network card, etc. Since Bochs simulates the whole PC
environment, the software running in the simulation believes it is running on a real
machine (hence Virtual Machine), and in this way Bochs supports execution of unmodified
legacy software (e.g. operating systems) on its virtual machine without any difficulty.
These interactions between Bochs and the host operating system can be complicated, and
in some cases are specific to the host platform. Although Bochs is too slow a system to be
used as a virtual machine technology in practice, it has several important applications that
are hard to achieve using commercial emulators. It can be used to let
people run applications under a second operating system. For example, it lets people run
Windows software on a non-x86 workstation or on an x86 Unix box. Being open
source, it can be extensively used for debugging new operating systems. For example, if
your boot code for a new operating system does not seem to work, Bochs can be used to
go through the memory contents, CPU registers, and other relevant information to fix
the bug. Writing a new device driver, and understanding how hardware devices work and
interact with each other, are made easy through Bochs. In industry, it is used to support
legacy applications on modern hardware and as a reference model when testing new x86-
compatible hardware.
2.2.2 CRUSOE
Transmeta's VLIW-based Crusoe processor comes with a dynamic x86 emulator,
called the “code morphing engine”, and can execute any x86-based application on top of it.
Although the initial intent was to create a simpler, smaller, and less power-consuming
chip, which Crusoe is, there were few compiler writers willing to target this new processor.
Thus, with some additional hardware support, extensive caching, and other optimizations,
Crusoe was released with an x86 emulator on top of it. It uses 16 MB of system memory
as a “translation cache” that stores recent results of x86-to-VLIW instruction
translations for future use.
The Crusoe is designed to handle the x86 ISA's precise exception semantics without
constraining speculative scheduling. This is accomplished by shadowing all registers
holding the x86 state. For example, if a division by zero occurs, it rolls back the effect
of all the out-of-order and aggressively loaded instructions by copying the processor state
from the shadow registers. Alias hardware also helps the Crusoe rearrange code so that
data can be loaded optimally. All these techniques greatly enhance Crusoe's
performance.
2.2.3 QEMU
QEMU is a fast processor emulator that uses a portable dynamic translator. It supports
two operating modes: user-space-only emulation and full system emulation. In the former mode,
QEMU can launch Linux processes compiled for one CPU on another CPU, and can be used
for cross-compilation and cross-debugging. In the latter mode, it can emulate a full system that
includes a processor and several peripheral devices. It supports emulation of a number of
processor architectures, including x86, ARM, PowerPC, and SPARC, unlike Bochs, which
is closely tied to the x86 architecture. Like Crusoe, it uses dynamic translation to
native code for reasonable speed. In addition, its features include support for self-
modifying code and precise exceptions. Both a full software MMU and simulation through
mmap() system calls on the host are supported. During dynamic translation, it converts a
piece of encountered code to the host instruction set.
The basic idea is to split every x86 instruction into a few simpler instructions.
Each simple instruction is implemented by a piece of C code, and then a compile-time
tool feeds the corresponding object file to a dynamic code generator which concatenates
the simple instructions to build a function. More such tricks enable QEMU to be
relatively portable and simple while achieving high performance. Like Crusoe, it
also uses a 16 MB translation cache and flushes it to empty when it gets filled. It uses
basic blocks as the translation unit. Self-modifying code is a special challenge in x86
emulation because no instruction cache invalidation is signaled by the application when
code is modified. When translated code is generated for a basic block, the
corresponding host page is write-protected if it is not already read-only. Then, if a write
access is made to the page, Linux raises a SEGV signal. QEMU, at this point, invalidates
all the translated code in the page and enables write accesses to the page, in order to support self-
modifying code. It uses basic block chaining to accelerate the most common sequences.
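The translation-cache idea can be illustrated with a small sketch. The guest “instructions” and micro-operations below are invented, and real QEMU emits host machine code rather than interpreted closures, but the flow is the same: translate a basic block once, cache the result keyed by its guest address, and reuse it on every later execution.

    # Toy sketch of dynamic translation with a translation cache, in the spirit of
    # the QEMU description above (the guest opcodes and micro-ops are invented).

    translation_cache = {}      # guest basic-block address -> callable host "code"

    def translate_block(guest_block):
        """Split each guest instruction into simpler micro-ops, then glue them
        into one host function (QEMU concatenates pieces of compiled C code)."""
        micro_ops = []
        for opcode, operand in guest_block:
            if opcode == "MOV":
                dst, imm = operand
                micro_ops.append(lambda st, d=dst, v=imm: st.__setitem__(d, v))
            elif opcode == "INC":
                micro_ops.append(lambda st, r=operand: st.__setitem__(r, st[r] + 1))
        def host_function(state):
            for op in micro_ops:
                op(state)
        return host_function

    def execute(addr, guest_block, state):
        # Reuse previously translated code whenever the same block address is hit again.
        if addr not in translation_cache:
            translation_cache[addr] = translate_block(guest_block)
        translation_cache[addr](state)

    cpu_state = {"EAX": 0}
    block = [("MOV", ("EAX", 40)), ("INC", "EAX"), ("INC", "EAX")]
    execute(0x1000, block, cpu_state)   # translated, cached, then executed
    execute(0x1000, block, cpu_state)   # served straight from the translation cache
    print(cpu_state["EAX"])             # 42 (the MOV resets EAX on each run)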
2.2.4 BIRD
BIRD is an interpretation engine for x86 binaries that currently supports only x86 as the
host ISA and aims to extend to other architectures as well. It exploits the similarity
between the architectures and tries to execute as many instructions as possible on the
native hardware. All other instructions are supported through software emulation. Apart
from interpretation, it provides tools for binary analysis as well as binary rewriting that
are useful in eliminating security vulnerabilities and in code optimization. It combines
static as well as dynamic analysis and translation techniques for efficient emulation of
x86-based programs. Dynamo is another project that has a similar aim. It uses a cache
to store so-called “hot traces” (sequences of frequently executed instructions), e.g. a
block in a for loop, and optimizes and executes them natively on the hardware to improve
performance.
With the falling cost of multi-processor systems, dual-core technology will accelerate data center
consolidation and virtual infrastructure to the full extent. In particular, the new
virtualization hardware-assist enhancements (Intel's “VT” and AMD's “Pacifica”) will
enable robust virtualization of the CPU functionality. Such hardware virtualization
support does not replace virtual infrastructure, but allows it to run more efficiently.
3.0 MAIN DISCUSSION
To let Windows customers reap these benefits, Microsoft today provides two
fundamental hardware virtualization technologies: Hyper-V for servers and Virtual PC
2007 for desktops. These technologies also underlie other Microsoft offerings, such as
Virtual Desktop Infrastructure (VDI) and the forthcoming Microsoft Enterprise Desktop
Virtualization (MED-V). The following sections describe each of these.
a) Hyper-V
The fundamental problem in hardware virtualization is to create virtual machines in
software. The most efficient way to do this is to rely on a thin layer of software known as
a hypervisor running directly on the hardware. Hyper-V, part of Windows Server 2008, is
Microsoft’s hypervisor. Each VM Hyper-V provides is completely isolated from its
fellows, running its own guest operating system. This lets the workload on each one
execute as if it were running on its own physical server.
Figure 5: Hyper-V partitions. The parent partition runs Windows Server 2008, while child partitions run other supported guest operating systems such as Windows 2000 Server and SUSE Linux, all on top of Hyper-V and the hardware.
As the figure shows, VMs are referred to as partitions in the Hyper-V world. One of
these, the parent partition, must run Windows Server 2008. Child partitions can run any
other supported operating system, including Windows Server 2008, Windows Server
2003, Windows Server 2000, Windows NT 4.0, and Linux distributions such as SUSE
Linux. To create and manage new partitions, an administrator can use an MMC snap-in
running in the parent partition.
This approach is fundamentally different from Microsoft’s earlier server technology for
hardware virtualization. Virtual Server 2005 R2, the virtualization technology used with
Windows Server 2003, ran largely on top of the operating system rather than as a
hypervisor. One important difference between these two approaches is that the low-level
support provided by the Windows hypervisor lets virtualization be done in a more
efficient way, providing better performance.
Other aspects of Hyper-V are also designed for high performance. Hyper-V allows
assigning multiple CPUs to a single VM, for example, and it’s a native 64-bit technology.
(In fact, Hyper-V is part of all three 64-bit editions of Windows Server 2008—Standard,
Enterprise, and Data Center—but it’s not available for 32-bit editions.) The large physical
memory space this allows is useful when many virtual machines must run on a single
physical server. Hyper-V also allows the VMs it supports to have up to 64 gigabytes of
memory per virtual machine. And while Hyper-V itself is a 64-bit technology, it supports
both 32-bit and 64- bit VMs. VMs of both types can run simultaneously on a single
Windows Server 2008 machine.
Whatever operating system it’s running, every VM requires storage. To allow this,
Microsoft has defined a virtual hard disk (VHD) format. A VHD is really just a file, but
to a virtual machine, it appears to be an attached disk drive. Guest operating systems and
their applications rely on one or more VHDs for storage. To encourage industry adoption,
Microsoft has included the VHD specification under its Open Specification Promise
(OSP), making this format freely available for others to implement. And because Hyper-
V uses the same VHD format as Virtual Server 2005 R2, migrating workloads from this
earlier technology is relatively straightforward.
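Because the VHD footer layout is published under the OSP, a few of its fields can be read with ordinary file I/O. The sketch below is a minimal illustration based on the footer layout as commonly documented: a 512-byte, big-endian structure at the end of the file beginning with the “conectix” cookie. The offsets used here should be verified against Microsoft's specification before being relied upon, and the file path in the example is hypothetical.

    # Minimal sketch of reading a few fields from a VHD file's footer.
    # Offsets follow the published VHD footer layout as commonly documented;
    # verify them against the specification before relying on this.
    import struct

    DISK_TYPES = {2: "fixed", 3: "dynamic", 4: "differencing"}

    def read_vhd_footer(path):
        with open(path, "rb") as f:
            f.seek(-512, 2)                  # the footer occupies the last 512 bytes
            footer = f.read(512)

        if footer[0:8] != b"conectix":       # a valid VHD footer starts with this cookie
            raise ValueError("not a VHD file (missing 'conectix' cookie)")

        # Multi-byte integers in the footer are stored big-endian.
        original_size = struct.unpack(">Q", footer[40:48])[0]
        current_size = struct.unpack(">Q", footer[48:56])[0]
        disk_type = struct.unpack(">I", footer[60:64])[0]

        return {
            "original_size_bytes": original_size,
            "current_size_bytes": current_size,
            "disk_type": DISK_TYPES.get(disk_type, "unknown"),
        }

    # Example (hypothetical path):
    # print(read_vhd_footer(r"C:\VMs\guest1.vhd"))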
Windows Server 2008 has an installation option called Server Core, in which only a
limited subset of the system’s functions is installed. This reduces both the management
effort and the possible security threats for this system, and it’s the recommended choice
for servers that deploy Hyper-V. Systems that use this option have no graphical user
interface support, however, and so they can’t run the Windows Server virtualization
management snap-in locally. Instead, VM management can be done remotely using
Virtual Machine Manager. It’s also possible to deploy Windows Server 2008 in a
traditional non-virtualized configuration. If this is done, Hyper-V isn’t installed, and the
operating system runs directly on the hardware.
The VMs that Hyper-V provides can be used in many different ways. Using an approach
called Virtual Desktop Infrastructure, for example, Hyper-V can be used to run client
desktops on a server. Figure 6 illustrates the idea.
Figure 6: Illustrating Virtual Desktop Infrastructure. A Windows Server 2008 parent partition and child partitions each running Windows Vista sit on Hyper-V, while applications on physical client machines connect to the virtual desktops over RDP.
As the figure shows, VDI runs an instance of Windows Vista in each of Hyper-V’s child
partitions (i.e., its VMs). Vista has built-in support for the Remote Desktop Protocol
(RDP), which allows its user interface to be accessed remotely. The client machine can
be anything that supports RDP, such as a thin client, a Macintosh, or a Windows system.
If this sounds similar to presentation virtualization, it is: RDP was created for Windows
Terminal Services. Yet with VDI, there’s no need to deploy an explicit presentation
virtualization technology—Hyper-V and Vista can do the job.
Like presentation virtualization, VDI gives each user her own desktop without the
expense and security risks of installing and managing those desktops on client machines.
Another potential benefit is that servers used for VDI during the day might be re-
deployed for some other purpose at night. When users go home at the end of their work
day, for example, an administrator could use Virtual Machine Manager to store each
user’s VM, and then load other VMs running some other workload, such as overnight
batch processing. When the next workday starts, each user’s desktop can then be restored.
This hosted desktop approach can allow using hardware more efficiently, and it can also
help simplify management of a distributed environment.
c) Virtual PC 2007
The most commercially important aspect of hardware virtualization today is the ability to
consolidate workloads from multiple physical servers onto one machine. Yet it can also
be useful to run guest operating systems on a desktop machine. Virtual PC 2007, shown
in Figure 7, is designed for this situation.
Figure 7: Virtual PC 2007 running a guest operating system in a virtual machine on a desktop computer.
With MED-V, clients with Virtual PC installed can have pre-configured VM images
delivered to them from a MED-V server. Figure 8 shows how this looks.
Figure 8: MED-V delivering pre-configured virtual machine images from a MED-V server to client machines running Virtual PC.
While virtualization has been a part of the IT landscape for decades, it is only recently
(in 1998) that the benefits of virtualization were delivered to industry-standard x86-based
platforms, which now form the majority of desktop, laptop and server shipments. A key
benefit of virtualization is the ability to run multiple operating systems on a single
physical system and share the underlying hardware resources – known as partitioning.
Today, virtualization can apply to a range of system layers, including hardware-level
virtualization, operating system level virtualization, and high-level language virtual
machines. Hardware-level virtualization was pioneered on IBM mainframes in the
1970s, and then more recently Unix/RISC system vendors began with hardware-based
partitioning capabilities before moving on to software-based partitioning.
Virtualization technology has been around for many years. Virtualization is rapidly
transforming the IT work environment and lets organizations run multiple virtual
machines on a single physical machine, sharing the resources of that single computer
across multiple environments. Virtualization allowed us to streamline LAN administration,
as these activities can be performed by fewer, fully dedicated employees from a
centralized location. Our security is strengthened by reducing the number of physical
locations containing servers and electronic Personally Identifiable Information. Using
virtualization can potentially provide high availability of electronic data through on-
line data redundancy that was not available in the previous environment. The
consolidation and standardization of LAN services improves the Agency's operational
efficiency by providing more integration, more streamlining, more standardization, and
more flexibility.
Since the tragedy of September 11, 2001, security has become a major concern within
the federal government in the USA and has led to a movement to centralize rather than
decentralize IT systems and resources to protect them better by having fewer facilities to
physically secure and monitor. More recently, concerns about energy utilization have
led to directives to consolidate and centralize servers to save energy. President Obama’s
administration has called for federal agencies to cut energy consumption in their data
centers as part of the broader effort to green federal operations. An executive order was
signed by President Obama on October 5, 2009 requiring Agencies to adopt best
practices to manage their servers in an energy-efficient manner. Prior to this initiative,
our Field Office servers operated at about 20-30 percent utilization, so energy savings can
be realized by consolidating our servers. The Department of Agriculture's Chief
Information Officer has directed Agencies to migrate their servers to Data Centers. Due
to the need for NASS (National Agricultural Statistics Service) to maintain a strong
degree of independence as a federal statistical agency, the Department’s CIO has agreed
to allow NASS to continue to manage our LAN servers, but centralize our servers for
security and energy reasons. There are also mandates and initiatives to support telework
and remote access to the Agency network. Virtualization can play a role in meeting all
of these overarching directives. With a couple of exceptions, they chose to virtualize the
entire desktop as opposed to the more common technique of virtualizing individual
applications to run from a server. They are using the Citrix XenServer to virtualize the
servers. They are consolidating the 94 Field Office servers into 44 servers with 22
virtualized servers located in Washington and 22 servers located in Kansas City.
Many organizations today rely on some degree of virtualization. Whether it is a
few virtual machines running on a single physical computer or a whole server farm
across multiple root servers, virtualization optimizes investment in hardware and
network infrastructure by:
workloads per server, and 5% run more than 30 workloads per server. It’s important to
note that a larger percentage of users are running 10 to 20 VMs per server. This trend
should continue as users become more comfortable maximizing physical server
utilization.
3.4 Virtualization Fundamentals
Virtualization can be considered IT asset optimization. There are four parts to asset
optimization: rationalization, which is the removal of “slack” from the system to match
expenditures with actual needs; optimization, which is the complement to
rationalization, altering actual requirements to gain efficiency and economies of
scale; consolidation, which is the process of combining data or applications in an attempt
to increase resource utilization; and finally virtualization, which is the process of
“combining several operating system images into a single virtualized platform,
providing economies of scale in resource utilization while maintaining a partition
between operational environments” (Hillier, 2006).
Storage virtualization allows all hard drives on the system to act like one large pool of
storage. This increases the efficiency of storage by allowing files to be stored
wherever there is space, rather than allowing some drives to go underutilized. With
virtualization, drives can be added or replaced on the fly, since the virtualization software
will reconfigure the network and the affected servers. Mirroring the image and backups
are faster, since the only data copied is the data that has changed. Such capability
will significantly reduce the scheduled downtime caused by handling these tasks
(Gruman, 2007).
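The pooling idea behind storage virtualization can be sketched as a simple allocator. The sketch below is conceptual only; real storage virtualization operates at the block level and also handles striping, migration and redundancy. Here, files are simply placed on whichever backing drive has the most free space, and new drives can join the pool on the fly.

    # Conceptual sketch of storage pooling: the pool hides individual drives and
    # places each file wherever there is space, so no drive sits underutilized.

    class StoragePool:
        def __init__(self, drives):
            self.free = dict(drives)        # drive name -> free capacity in GB
            self.placement = {}             # file name -> drive it landed on

        def store(self, filename, size_gb):
            # Pick the drive with the most free space that can still hold the file.
            candidates = [d for d, free in self.free.items() if free >= size_gb]
            if not candidates:
                raise RuntimeError("pool exhausted")
            target = max(candidates, key=lambda d: self.free[d])
            self.free[target] -= size_gb
            self.placement[filename] = target
            return target

        def add_drive(self, name, capacity_gb):
            # Drives can be added on the fly; the pool simply grows.
            self.free[name] = capacity_gb

    pool = StoragePool({"disk1": 100, "disk2": 40, "disk3": 70})
    print(pool.store("payroll.vhd", 60))    # lands on disk1
    print(pool.store("archive.vhd", 65))    # lands on disk3
    pool.add_drive("disk4", 200)            # expand the pool without touching clients
    print(pool.store("backups.vhd", 150))   # lands on the new disk4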
Virtualization software enables companies to use one physical server with the capability
of delivering the performance of multiple servers. Without virtualization, departments
needing more resources would have to get the cost of additional equipment approved,
followed by the time to order and receive, then install. Virtualization allows IT
Managers to maximize resources by combining them on a single server (Mullins, 2007).
With virtualization, new applications will be made available within a few minutes and
without the expense of additional equipment (Connor & Mullins, 2007). VMware states
that one server can be a consolidation of 10-15 production systems. Although some
applications are labeled as “non-mission critical”, when a significant number of users
are affected they become mission critical (Stratus Technology, 2007 a and b).
Research shows that with virtualization, many existing strategies governing the
implementation/management of IS projects may not work any longer. For example,
Microsoft is known for its operating system. But with virtualization, the operating
system is not as important an issue as it used to be. Third-party vendors making
applications can use a virtual environment and run their own microkernels, which
eliminates the conventional operating system. Since the hypervisor is becoming the
intermediary of the data center, vendors are building their layers to tap straight into the
hypervisor. This does not always remove the dependency on the operating system; however,
it does limit its influence (Conry-Murray, 2007).
As Chart 1 below shows, 90% of 250 companies surveyed by Information Week are
either already using server virtualization or planning to in the future. The remaining 10%
not planning to implement virtualization cite a lack of funds as the primary reason.
Some additional reasons are lack of skilled IT personnel and the training that would be
required for the staff to handle the complexity of the virtual environment (Smith, 2007).
To illustrate how to assess the feasibility of implementing a virtualization project, and
how to choose the right technology provider, experiences reported by management at
several companies are included.
CONCLUSION
The pull of virtualization is strong—the economics are too attractive to resist. And for
most organizations, there’s no reason to fight against this pull. Well-managed
virtualization technologies can make their world better.
Microsoft takes a broad view of this area, providing hardware virtualization,
presentation virtualization, application virtualization, and more. The company also takes
a broad view of management, with virtualized technologies given equal weight to their
physical counterparts. As the popularity of virtualization continues to grow, these
technologies are becoming the bedrock of modern computing.
In view of the examination, it is apparent from the measurements that the performance of
systems using Windows 10 as the native operating system is considerably better
compared with virtualized Windows 10 guest systems. The vast majority of the
performance loss that accompanies virtualized systems can be partly ascribed to
the additional overhead introduced by virtualization. It is clear that the OS-VM combination
also plays a critical role.
Based on the data gathered throughout this paper, VMware is a solution to which Triton
should give serious consideration. While it appears Microsoft will be moving forward
with its virtualization software, that software will have to be debugged for some period of time.
VMware has already gone through most debugging processes and has proven
effective in many businesses. VMware has received high accolades from some large technology
companies like Dell and HP. There are also companies like Stratus who are promoting
their server systems, which thrive on the importance of uptime, in combination with
VMware.
Research has shown that numerous companies have successfully put VMware
virtualization into place. Although the research includes some negative comments, the positives
outweigh the negatives. The expectation of a hypervisor that will not need an
operating system, per se, opens up the types of applications which can be run.
To see how to handle the implementation of a virtualization project, let’s start by
examining the case at Triton Systems Inc. The company currently has forty servers
supporting the needs of all users. These servers range in age from one year to five years
old. All servers have a warranty of five years and due to the critical nature of the data
they contain, the servers are replaced just prior to the warranty expiring. As new
servers are required to be purchased, they are considering purchasing servers which will
allow them to take full advantage of server virtualization.
VMware has proven itself such a reliable solution that IBM, Hewlett-Packard and Dell all have
plans to embed virtualization software in their x86 servers. This will allow easier
setup for companies wanting to use virtual servers (Thibodeau, 2007). Information Week
states that four out of five companies using virtualization software are using
VMware. It is believed that business technology professionals would prefer to buy from
virtualization software vendors instead of system management vendors. Some statistics
state that one in five companies use virtualization software from more than one vendor, with a
third stating that utilizing several different vendors introduces problems
(Smith, 2007). Based on all these reported results, the VMware solution was adopted by
management at Triton Systems.
There are, however, some concerns with the VMware virtualization project. These include
a different way of managing the disks, in which the IT staff will not be able to copy
volumes with partial files but must copy the actual files for backup. Another concern
arises with setup. Care must be taken when dealing with high- and low-performance
drives. If lower-performance drives are accidentally placed in high-performance virtual
servers, this could hinder overall performance, including that of critical applications. While
using virtualization tools is not difficult, it is simply different from what most IT
professionals have become accustomed to. Choosing the right storage form is also
critical. There are two options: network-based, which is utilized by server-based
software, or array-based, which typically is part of storage management software. The
downfall of array-based storage is that array storage has to be purchased, which may
create an expensive vendor lock-in. Network-based storage seems to be the most flexible and
can be managed from anywhere, provided the drives are available via the storage area network
(SAN) (Gruman, 2007).
George Scangas, manager of IT architecture at Welch's, stated that had Welch's not had
virtualization, they would have had to build a new data center, which could have cost in
the high six figures. By using virtualization, Scangas stated, there is an immediate cost
saving from using less cable, not to mention the amount of power (although other data
disagrees with this) and rack space saved. It is estimated that Welch's saved at least
$300,000 in hardware costs alone, not including the reduced power bill due to the need
for less cooling. Welch's currently runs 100 virtual machines and expects to add an
additional 10 or 20 in the next quarter. Scangas stated that his confidence in VMware has
grown as the technology matures, such that they are more willing to place “business-
critical programs on VMware”. A study by The Strategic Council in June 2007
reported that 45% of companies considered their virtualization deployments
unsuccessful. To take that survey a step further, more than one quarter failed to realize
a return on investment and less than ten hit their targeted cost savings. With all this being
said, virtualization is not going anywhere, and there is hope that when Microsoft finally
makes its entry it will provide an additional boost and spur more adoption of the technology
(Watson, 2007).
In a September 2007 whitepaper, CiRBA stated that virtualization is not just a sizing
exercise, but an effort to ensure that all constraints which govern and impact the company's
IT environment are considered during the planning process, along with how to manage the
virtual environment. The article credited VMware as the industry-leading virtualization
solution, but suggested that a company may consider combining it with “accurate intelligence and
focused analytics”, which will allow the current servers to be fitted into the new virtual
configuration (CiRBA, 2008).
REFERENCES
Business Week Online (2007). The Virtues of Virtualization, Business Week Online,
12(3), 6.
Bhatt, M., Ahmed, I., & Lin, Z. (2018, February). Using virtual machine introspection
for operating systems security education. In Proceedings of the 49th ACM Technical
Symposium on Computer Science Education (pp. 396-401). ACM.
Goodchild, J. (2006). Virtualization software or blade servers: Which is right for server
consolidation? Server Virtualization News, June 09, searchservervirtualization.com.
Gruman, G. (2007). Storage Virtualization Takes Off , CIO, 9/15/2007, 20(23), 27-31.
Guerrero, P. (2008). Composite Information Server, DM Review, January, 18(1), 38.
Lechner, R. (2007). Using virtualization to boost efficiency, Network World, 9/24/2007,
24(37), 24.
Mullins, R. (2007). VMware’s flying high, but…., Network World, 9/10/2007, 24(35), 1
and 60.
Orfali, R., Harkey, D., & Edwards, J. (1994). Essential Client/Server Survival Guide.
New York: Van Nostrand Reinhold.
Smith, L. (2007). The Reality of Going Virtual, InformationWeek, 2/12/2007, 1125,
49-52.