
VIRTUALIZATION

BY

ADEYEMI ADEJOKE ROFIAT


2017235020002

A SEMINAR REPORT SUBMITTED TO THE
DEPARTMENT OF COMPUTER STUDIES,
FACULTY OF SCIENCE, THE POLYTECHNIC, IBADAN,

IN PARTIAL FULFILMENT OF THE REQUIREMENTS
FOR THE AWARD OF HIGHER NATIONAL DIPLOMA
IN COMPUTER SCIENCE

SUPERVISOR:
MR. G.G.O. EGBEDOKUN

NOVEMBER, 2022

CERTIFICATION

This is to certify that this seminar report was written by Adeyemi Adejoke Rofiat, with
matriculation number 2017235020002, of the Department of Computer Studies, Faculty of
Science, The Polytechnic, Ibadan, under my supervision.

_______________________ _____________________
MR EGBEDOKUN G.G.O. DATE
SUPERVISOR

_________________________ _______________________
MR BABATUNDE FADIORA DATE
HEAD OF DEPARTMENT

DEDICATION

This report is dedicated to the Most High God for His mercy, wisdom, guidance, protection
and provision at all times.

ACKNOWLEDGEMENT

Foremost, I express my sincere gratitude to God Almighty for His supremacy and grace which
saw me through; nothing can be achieved without Him.

My gratitude goes to the Head of Department, MR BABATUNDE FADIORA, for taking care of
us throughout the course of this programme and for his words of encouragement and support.
Thank you, Sir.

I am highly grateful to my supervisor, MR EGBEDOKUN G.G.O., for his positive criticism,
support, patience, motivation and immense knowledge throughout the study and research period.
His guidance helped me throughout the research and the writing of this report. I could not have
imagined having a better supervisor and mentor for my HND programme.

I am greatly short of words to appreciate the pillars behind the garden of my success, my
wonderful parents, MR AND MRS ADEYEMI; God be with you. To my siblings, thank you
so much. I appreciate your love and support at all times.

I extend my gratitude to every blessed staff member of the Department of Computer
Studies. May the Lord reward you all abundantly, Amen.

To all my colleagues, you are the best. May the good Lord continue to enlarge your coast.
See you at the top.

ABSTRACT

Virtualization has become one of the hottest information technologies of the past few
years. It combines or divides computing resources to present one or more operating
systems, using methodologies such as hardware and software partitioning or aggregation,
partial or complete machine simulation, emulation, time sharing, and others. Yet, despite
the proclaimed cost savings and efficiency improvements, implementing virtualization
involves a high degree of uncertainty and, consequently, a great possibility of failure.
Experience from managing VMware-based project activities at several companies is
reported as examples to illustrate how to increase the chance of successfully implementing
a virtualization project. We first analyze the results and provide a survey of completed
performance studies comparing benchmark results for various tasks such as video editing,
video conferencing, and application start-up. We then discuss the performance impact the
host operating system has on virtual environments and, based on these results, provide a
recommendation for the best host operating system to use when running virtual machines.

Virtualization technologies find important applications over a wide range of areas such as
server consolidation, secure computing platforms supporting multiple operating systems,
kernel debugging and development, and system migration, resulting in widespread usage.
Most of them present similar operating environments to the end user; however, they tend
to vary widely in the level of abstraction at which they operate and in their underlying
architecture. Virtualization was invented more than 30 years ago to allow large, expensive
mainframes to be easily shared among different application environments. As hardware
prices went down, the need for virtualization faded away. More recently, virtualization at
all levels (system, storage and network) became important again as a way to improve
system security, reliability and availability, reduce costs, and provide greater flexibility.

Table Of Contents

DEDICATION iii

ACKNOWLEDGEMENT iv

ABSTRACT v

1.0 INTRODUCTION 1
1.1 Virtualization Technologies 2

2.0 LITERATURE REVIEW 11


2.1 BACKGROUND OVERVIEW 11
2.2 VIRTUALIZATION AT THE INSTRUCTION SET ARCHITECTURE
LEVEL 12
2.2.1 BOCHS 13
2.2.2 CRUSOE 14
2.2.3 QEMU 14
2.2.4 BIRD 15
2.3 HOW VIRTUALIZATION COMPLEMENTS NEW GENERATION
HARDWARE 15

3.0 MAIN DISCUSSION 17


3.1 Hardware Virtualization 17
3.2 Scope of virtualization deployment 24
3.3 Global Virtualization Services 25
3.4 Virtualization Fundamentals 26

CONCLUSION 28

REFERENCES 31

1.0 INTRODUCTION

Virtualization is one of the hottest trends in information technology today. This is no
accident. While a variety of technologies fall under the virtualization umbrella, all of
them are changing the IT world in significant ways.

This overview introduces Microsoft’s virtualization technologies, focusing on three areas:
hardware virtualization, presentation virtualization, and application virtualization. Since
every technology, virtual or otherwise, must be effectively managed, this discussion also
looks at Microsoft’s management products for a virtual world. The goal is to make clear
what these offerings do, describe a bit about how they do it, and show how they work
together.

The virtual machine concept has been in existence since the 1960s, when it was first developed by
IBM to provide concurrent access to a mainframe computer. Each virtual machine (VM)
was an instance of the physical machine that gave users the illusion of accessing the
physical machine directly. It was an elegant and transparent way to enable time-sharing
and resource sharing on highly expensive hardware. Each VM was a fully protected
and isolated copy of the underlying system. Users could execute, develop, and test
applications without ever having to fear crashing the systems used by other users on
the same computer. Virtualization was thus used to reduce hardware acquisition costs
and to improve productivity by letting more users work on the machine simultaneously.

As hardware got cheaper and multiprocessing operating systems emerged, VMs were
almost extinct in the 1970s and 1980s. With the emergence of a wide variety of PC-based
hardware and operating systems in the 1990s, the virtualization idea was in demand again.
The main use for VMs then was to enable execution of a range of applications,
originally targeted at different hardware and operating systems, on a given machine. The
trend continues even now. “Virtuality” differs from “reality” only in the formal world, while
possessing a similar essence or effect. In the computer world, a virtual environment
is perceived the same as a real environment by application programs and the rest of
the world, though the underlying mechanisms are formally different. More often than not,
a virtual environment presents a machine (or resource) that has more (or less) capability
compared to the physical machine (or resource) underneath, for various reasons.

1.1 Virtualization Technologies

To understand modern virtualization technologies, think first about a system without
them. Imagine, for example, an application such as Microsoft Word running on a
standalone desktop computer. Figure 1 shows how this looks.

Figure 1: A system without virtualization (the application runs on the operating system, which runs directly on the physical machine’s hardware)

The application is installed and runs directly on the operating system, which in turn
runs directly on the computer’s hardware. The application’s user interface is presented
via a display that’s directly attached to this machine. This simple scenario is familiar to
anybody who’s ever used Windows.

But it’s not the only choice. In fact, it’s often not the best choice. Rather than locking
these various parts together—the operating system to the hardware, the application to the
operating system, and the user interface to the local machine—it’s possible to loosen the
direct reliance these parts have on each other.

Doing this means virtualizing aspects of this environment, something that can be
done in various ways. The operating system can be decoupled from the physical hardware
it runs on using hardware virtualization, for example, while application virtualization
allows an analogous decoupling between the operating system and the applications that
use it. Similarly, presentation virtualization allows separating an application’s user
interface from the physical machine the application runs on. All of these approaches to
virtualization help make the links between components less rigid. This lets hardware and
software be used in more diverse ways, and it also makes both easier to change. Given
that most IT professionals spend most of their time working with what’s already installed
rather than rolling out new deployments, making their world more malleable is a good
thing.

Each type of virtualization also brings other benefits specific to the problem it
addresses. Understanding what these are requires knowing more about the technologies
themselves. Accordingly, the next sections take a closer look at each one.

1) Hardware Virtualization
For most IT people today, the word “virtualization” conjures up thoughts of running
multiple operating systems on a single physical machine. This is hardware virtualization,
and while it’s not the only important kind of virtualization, it is unquestionably the most
visible today.
The core idea of hardware virtualization is simple: Use software to create a virtual
machine (VM) that emulates a physical computer. By providing multiple VMs at once,
this approach allows running several operating systems simultaneously on a single
physical machine. Figure 2 shows how this looks.

Figure 2: Illustrating hardware virtualization (a hardware virtualization layer running on the physical machine hosts several virtual machines, each with its own operating system)

When used on client machines, this approach is often called desktop virtualization,
while using it on server systems is known as server virtualization. Desktop virtualization
can be useful in a variety of situations. One of the most common is to deal with
incompatibility between applications and desktop operating systems. For example,
suppose a user running Windows Vista needs to use an application that runs only on
Windows XP with Service Pack 2. By creating a VM that runs this older operating
system, then installing the application in that VM, this problem can be solved.
Yet while desktop virtualization is useful, the real excitement around hardware
virtualization is focused on servers. The primary reason for this is economic: Rather than
paying for many under-utilized server machines, each dedicated to a specific workload,
server virtualization allows consolidating those workloads onto a smaller number of more
fully used machines. This implies fewer people to manage those computers, less space to
house them, and fewer kilowatt hours of power to run them, all of which saves money.
Server virtualization also makes restoring failed systems easier. VMs are stored as
files, and so restoring a failed system can be as simple as copying its file onto a new
machine. Since VMs can have different hardware configurations from the physical
machine on which they’re running, this approach also allows restoring a failed system
onto any available machine. There’s no requirement to use a physically identical system.
Hardware virtualization can be accomplished in various ways, and so Microsoft offers
several different technologies that address this area. They include the following:

• Hyper-V: Part of Windows Server 2008, Hyper-V provides hardware virtualization for
servers.

• Virtual Desktop Infrastructure (VDI): Based on Hyper-V and Windows Vista, VDI
defines a way to create virtual desktops.

• Virtual PC 2007: A free download for Windows Vista and Windows XP, Virtual PC
provides hardware virtualization for desktop systems.

• Microsoft Enterprise Desktop Virtualization (MED-V): Using MED-V, an
administrator can create Virtual PC-based VMs that include one or more applications,
then distribute them to client machines.

All of these technologies are useful in different situations, and all are described in more
detail later in this overview.

2) Presentation Virtualization
Much of the software people use most is designed to both run and present its user
interface on the same machine. The applications in Microsoft Office are one common
example, but there are plenty of others. While accepting this default is fine much of the
time, it’s not without some downside. For example, organizations that manage many
desktop machines must make sure that any sensitive data on those desktops is kept
secure. They’re also obliged to spend significant amounts of time and money managing
the applications resident on those machines. Letting an application execute on a remote

server, yet display its user interface locally—presentation virtualization—can help.
Figure 3 shows how this looks.

Figure 3: Illustrating presentation virtualization (the application runs in a virtual session on a remote server, while presentation virtualization projects its user interface to the local physical machine)

As the figure shows, this approach allows creating virtual sessions, each interacting with
a remote desktop system. The applications executing in those sessions rely on
presentation virtualization to project their user interfaces remotely. Each session might
run only a single application, or it might present its user with a complete desktop offering
multiple applications. In either case, several virtual sessions can use the same installed
copy of an application.
Running applications on a shared server like this offers several benefits, including the
following:

• Data can be centralized, storing it safely on a central server rather than on multiple
desktop machines. This improves security, since information isn’t spread across many
different systems.

• The cost of managing applications can be significantly reduced. Instead of updating
each application on each individual desktop, for example, only the single shared copy
on the server needs to be changed. Presentation virtualization also allows using simpler
desktop operating system images or specialized desktop devices, commonly called
thin clients, both of which can lower management costs.

• Organizations need no longer worry about incompatibilities between an application
and a desktop operating system. While desktop virtualization can also solve this
problem, as described earlier, it’s sometimes simpler to run the application on a central
server, then use presentation virtualization to make the application accessible to clients
running any operating system.

• In some cases, presentation virtualization can improve performance. For example,
think about a client/server application that pulls large amounts of data from a central
database down to the client. If the network link between the client and the server is
slow or congested, this application will also be slow. One way to improve its
performance is to run the entire application—both client and server—on a machine
with a high-bandwidth connection to the database, then use presentation virtualization
to make the application available to its users.

Microsoft’s presentation virtualization technology is Windows Terminal Services. First
released for Windows NT 4, it’s now a standard part of Windows Server 2008. Terminal
Services lets an ordinary Windows desktop application run on a shared server machine
yet present its user interface on a remote system, such as a desktop computer or thin
client. While remote interfaces haven’t always been viewed through the lens of
virtualization, this perspective can provide a useful way to think about this widely used
technology.

3) Application Virtualization
Virtualization provides an abstracted view of some computing resource. Rather than
run directly on a physical computer, for example, hardware virtualization lets an
operating system run on a software abstraction of a machine. Similarly, presentation
virtualization lets an application’s user interface be abstracted to a remote device. In both
cases, virtualization loosens an otherwise tight bond between components.
Another bond that can benefit from more abstraction is the connection between an

application and the operating system it runs on. Every application depends on its
operating system for a range of services, including memory allocation, device drivers,
and much more. Incompatibilities between an application and its operating system can be
addressed by either hardware virtualization or presentation virtualization, as described
earlier. But what about incompatibilities between two applications installed on the same
instance of an operating system? Applications commonly share various things with other
applications on their system, yet this sharing can be problematic. For example, one
application might require a specific version of a dynamic link library (DLL) to function,
while another application on that system might require a different version of the same
DLL. Installing both applications leads to what’s commonly known as DLL hell, where
one of them overwrites the version required by the other. To avoid this, organizations
often perform extensive testing before installing a new application, an approach that’s
workable but time-consuming and expensive.
Application virtualization solves this problem by creating application-specific copies
of all shared resources, as Figure 4 illustrates. The problematic things an application
might share with other applications on its system—registry entries, specific DLLs, and
more—are instead packaged with it, creating a virtual application. When a virtual
application is deployed, it uses its own copy of these shared resources.

Figure 4: Illustrating application virtualization (an application virtualization layer packages the application’s logic with its own copies of shared resources, so the virtual application runs alongside ordinary applications on the same operating system)

Application virtualization makes deployment significantly easier. Since applications no
longer compete for DLL versions or other shared aspects of their environment, there’s no
need to test new applications for conflicts with existing applications before they’re rolled
out. And as Figure 4 suggests, these virtual applications can run alongside ordinary
applications—not everything needs to be virtualized.
Microsoft Application Virtualization, called App-V for short, is Microsoft’s technology
for this area. An App-V administrator can create virtual applications, then deploy those
applications as needed. By providing an abstracted view of key parts of the system,
application virtualization reduces the time and expense required to deploy and update
applications.
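To make the idea concrete, the following minimal Python sketch (illustrative only, not how App-V is implemented; the resource names are invented) shows how giving each virtual application its own private copy of shared resources avoids the DLL-version conflicts described above.

import copy

# Hypothetical machine-wide shared state that normally causes conflicts.
SYSTEM_SHARED = {"registry": {}, "dlls": {"msvcrt.dll": "7.0"}}

class VirtualApplication:
    def __init__(self, name, required_dlls):
        self.name = name
        # Package a private copy of the shared resources with the application.
        self.sandbox = copy.deepcopy(SYSTEM_SHARED)
        self.sandbox["dlls"].update(required_dlls)   # no effect on other applications

    def dll_version(self, dll):
        return self.sandbox["dlls"].get(dll)

app_a = VirtualApplication("AppA", {"msvcrt.dll": "7.0"})
app_b = VirtualApplication("AppB", {"msvcrt.dll": "9.0"})  # would cause "DLL hell" if shared

print(app_a.dll_version("msvcrt.dll"))      # 7.0
print(app_b.dll_version("msvcrt.dll"))      # 9.0
print(SYSTEM_SHARED["dlls"]["msvcrt.dll"])  # 7.0 -- the machine-wide copy is untouched

Because each virtual application resolves resources from its own sandbox first, two applications requiring different versions of the same DLL can coexist without either overwriting the other.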

A virtualization layer thus provides infrastructure support, using the lower-level
resources to create multiple virtual machines that are independent of and isolated from
each other. Sometimes, such a virtualization layer is also called a Virtual Machine Monitor
(VMM). Although traditionally VMM refers to a virtualization layer sitting directly on top
of the hardware and below the operating system, we may use the term to represent a generic
layer in many cases. There are innumerable ways in which virtualization can be useful in
practical scenarios, a few of which are the following:

1. Server Consolidation: To consolidate workloads of multiple under-utilized
machines onto fewer machines to save on hardware, management, and administration
of the infrastructure.
2. Application Consolidation: A legacy application might require newer hardware
and/or operating systems. The needs of such legacy applications could be served
well by virtualizing the newer hardware and providing access to it.
3. Sandboxing: Virtual machines are useful for providing secure, isolated environments
(sandboxes) for running foreign or less-trusted applications. Virtualization
technology can thus help build secure computing platforms.
4. Multiple execution environments: Virtualization can be used to create multiple
execution environments (in all possible ways) and can increase QoS by
guaranteeing specified amounts of resources.
5. Virtual hardware: It can provide hardware one never had, e.g. virtual SCSI
drives, virtual Ethernet adapters, virtual Ethernet switches and hubs, and so on.
6. Multiple simultaneous OSes: It can provide the facility of having multiple
simultaneous operating systems that can run many different kinds of applications.
7. Debugging: It can help debug complicated software such as an operating system
or a device driver by letting the user execute it on an emulated PC with full software
control.
8. Software migration: It eases the migration of software and thus helps mobility.
9. Appliances: It lets one package an application with its related operating
environment as an appliance.
10. Testing/QA: It helps produce arbitrary test scenarios that are hard to produce in
reality and thus eases the testing of software.

Accepting reality, we must admit that machines were never designed with the aim of
supporting virtualization. Every computer exposes only one “bare” machine interface and
hence would support only one instance of an operating system kernel. For example, only
one software component can be in control of the processor at a time and be able to execute a
privileged instruction. Anything that needs to execute a privileged instruction, e.g. an I/O
instruction, would need the help of the currently booted kernel. In such a scenario, the
unprivileged software traps into the kernel when it tries to execute an instruction
that requires privilege, and the kernel executes the instruction on its behalf. This technique
is often used to virtualize a processor.

In general, a virtualizable processor architecture is defined as “an architecture that allows
any instruction inspecting or modifying machine state to be trapped when executed in any
but the most privileged mode”. This provides the basis for isolating an entity (read: a
virtual machine) from the rest of the machine. Processors include instructions that can
affect the state of the machine, such as I/O instructions or instructions that modify
segment registers, processor control registers, flags, and so on; these are called
“sensitive” instructions. Such instructions can affect the underlying virtualization layer
and the rest of the machine, and thus must be trapped for a correct virtualization
implementation. The job of the virtualization layer (e.g. the virtual machine monitor) is to
remember the machine state for each of these independent entities and, when required, to
update only the state that represents the particular entity. However, the world is not
so simple: the most popular architecture, x86, is not virtualizable. It contains instructions
that, when executed in a lower-privilege mode, fail silently rather than causing a trap, so
virtualizing such architectures is more challenging than it seems.
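As an illustration of the trap-and-emulate idea described above, the following toy Python sketch (an invented three-instruction machine, not a real hypervisor) shows a VMM that keeps separate machine state per guest and emulates “sensitive” instructions when they trap. On classical x86 the scheme breaks down precisely because some sensitive instructions executed in user mode fail silently instead of trapping.

class GuestState:
    def __init__(self):
        self.interrupts_enabled = True   # per-guest copy of the machine flags
        self.io_log = []                 # per-guest record of emulated I/O

class ToyVMM:
    """Keeps separate machine state for each guest and emulates trapped instructions."""
    def __init__(self):
        self.guests = {}

    def create_guest(self, name):
        self.guests[name] = GuestState()

    def run(self, name, program):
        state = self.guests[name]
        for instr, *args in program:
            if instr in ("out", "cli"):        # sensitive/privileged: trap to the VMM
                self.trap(state, instr, args)
            elif instr == "add":               # harmless instruction: run "natively"
                pass
            else:
                raise ValueError(f"unknown instruction: {instr}")

    def trap(self, state, instr, args):
        # Emulate the instruction against this guest's state only.
        if instr == "out":
            state.io_log.append(args[0])
        elif instr == "cli":
            state.interrupts_enabled = False

vmm = ToyVMM()
vmm.create_guest("vm1")
vmm.run("vm1", [("add", 1, 2), ("cli",), ("out", "hello")])
print(vmm.guests["vm1"].interrupts_enabled, vmm.guests["vm1"].io_log)  # False ['hello']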

2.0 LITERATURE REVIEW

2.1 BACKGROUND OVERVIEW

Virtualization typically involves using special software to safely run multiple operating
systems and applications simultaneously on a single computer (Business Week Online,
2007; Scheier, 2007; Kovar, 2008). The technology initially allowed companies to
consolidate an array of servers to improve operating efficiency and reduce costs
(Strassmann, 2007; Lechner, 2007; Godbout, 2007). It has since been applied to data
storage as well as desktop systems (Taylor, 2007). Owing to the success of tools developed
by VMware Inc., the technology has become one of the most talked-about technologies
and has drawn attention from both IS professionals and non-IS executives in virtually all
industries. Despite its potentially significant impact on companies’ operations, this
technology has largely been ignored by academic researchers; almost all articles addressing
the technology and its potential effects on companies’ IT/IS management have been written
by practitioners. The technology, however, is not a panacea, and just like most other
information technologies, it comes with a great deal of risk (Dubie, 2007). Companies
considering virtualization projects will have to follow solid guidelines to help minimize
the risks associated with such projects. This study attempts to develop strategies for
successfully implementing virtualization projects. Lessons learned from the implementation
of a VMware-based virtualization project will be used to formulate strategies which
companies may use to reduce the uncertainty associated with managing their virtualization
projects. A literature review of virtualization technology is presented in the following
section. Lessons from implementing virtualization projects at several companies are
reported, comparing VMware and other virtualization solutions, throughout the paper.
Strategies for successfully implementing such a project are then presented. A brief summary
of major lessons learned and some directions for future studies conclude this paper.

There are many ways to provide a virtualized environment. A virtual platform maps
virtual requests from a virtual machine to physical requests. Virtualization can take place
at several different levels of abstraction, including the ISA (Instruction Set
Architecture), HAL (Hardware Abstraction Layer), operating system level, and user level.
ISA-level virtualization emulates the entire instruction set architecture of a virtual
machine in software. HAL-level virtualization exploits the similarity between the
architectures of the virtual and host machines, and directly executes certain instructions on
the native CPU without emulation. Virtualization has become an increasingly hot topic in
information technology lately, owing to the lower operational costs that businesses can
achieve by consolidating resources into a single virtualized system. Provided in this report
is a scientific analysis of how the performance of different hardware components varies
under virtualization.

A server is a computer used by multiple users for a specific application or for multiple
functions. The benefits of servers include a centralized location, the ability to control the
air conditioning, consistent data archiving, and speed. For example, a print server can
eliminate the need for each user to have a personal printer; this improves the
efficiency of using a network printer and reduces the cost of maintaining multiple
printers. Application servers, while similar to file servers, are unique in that they run
executable applications from the central location. By using application servers, the costs
of application software are reduced (Sportack, 1998).
The interdependency between servers and data center networks is one of the key
drivers of server management costs, even including LANs and SANs. For years,
blades have been the choice, with connectivity provided through the chassis backplane.
The connectivity problems associated with blades require IT personnel to get involved
every time an existing blade is replaced or a new one is installed. This involvement
required a lot of scheduling to ensure the impact on the company was minimized, and
thus wasted a great deal of time for those involved. The older blade servers suffered from
limited Fibre Channel rates, which restricted their usefulness in the working environment.
In order to get past peak traffic, the IT department would have to over-provision
connectivity, which resulted in underutilized networks and thus a waste of resources.

There are two solutions for optimization: blade servers or virtualization. Blade
servers are a hardware solution, while virtualization is a software solution. The blade
option allows each server to have its own processor and memory while sharing
power supplies, cabling and storage. Software virtualization simply pools the
server resources and allocates those resources as needed in a more efficient manner.
Some companies may choose the hardware route and others the software route; in some
cases the two can be used together to complement each other (Goodchild, 2007).

2.2 VIRTUALIZATION AT THE INSTRUCTION SET ARCHITECTURE LEVEL
Virtualization at the instruction set architecture level is implemented by emulating an
instruction set architecture completely in software. A typical computer consists of a
processor, memory chips, buses, hard drives, disk controllers, timers, multiple I/O devices,
and so on. An emulator tries to execute instructions issued by the guest machine (the
virtual machine that is being emulated) by translating them into a set of native instructions
and then executing them on the available hardware.

Those instructions would include the ones typical of a processor (add, sub, jmp, etc. on x86)
and the I/O-specific instructions for the devices (IN/OUT, for example). For an emulator to
successfully emulate a real computer, it has to be able to emulate everything that a real
computer does: reading ROM chips, rebooting, switching on, and so on.
Although this virtual machine architecture works fine in terms of simplicity and
robustness, it has its own pros and cons. On the positive side, the architecture provides
ease of implementation when dealing with multiple platforms.

Because the emulator works by translating instructions from the guest platform into instructions
of the host platform, it adapts easily when the guest platform’s architecture
changes, as long as there exists a way of accomplishing the same task through instructions
available on the host platform; in this way, it enforces no stringent binding between the
guest and host platforms. It can easily provide an infrastructure through which one can
create virtual machines based on, say, x86 on platforms such as x86, SPARC, Alpha, etc.
However, this architectural portability comes at the price of performance: since every
instruction issued by the emulated machine needs to be interpreted in software, the performance
penalty involved is significant. We take a few such examples to illustrate this in detail.
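Before looking at those examples, the following short Python sketch (a made-up guest instruction set, purely for illustration) shows the basic fetch-decode-dispatch loop that ISA-level emulators are built around: every guest instruction is decoded and carried out by host software, which is exactly where the performance penalty comes from.

def emulate(program):
    """Fetch-decode-dispatch loop for a made-up guest instruction set."""
    regs = {"r0": 0, "r1": 0}
    pc = 0
    while pc < len(program):
        op, *args = program[pc]          # fetch and decode the guest instruction
        if op == "mov":                  # each opcode is carried out by host code
            regs[args[0]] = args[1]
        elif op == "add":
            regs[args[0]] += regs[args[1]]
        elif op == "jnz":                # jump within the guest program if register != 0
            if regs[args[0]] != 0:
                pc = args[1]
                continue
        elif op == "halt":
            break
        pc += 1
    return regs

# Guest program: r0 = 3; r1 = 5; r1 = r1 + r0; halt
guest = [("mov", "r0", 3), ("mov", "r1", 5), ("add", "r1", "r0"), ("halt",)]
print(emulate(guest))                    # {'r0': 3, 'r1': 8}

Real emulators such as Bochs follow the same pattern, only with the full x86 instruction set and complete device models instead of a handful of toy opcodes.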

2.2.1 BOCHS
Bochs is an open source X86 PC emulator written in C++ by a group of people lead by
Kevil Lawton. It is highly portable emulator that can be run on most popular platforms
that include X86, PowerPC, Alpha, Sun, and MIPS. It can be compiled to emulate most
of the various of X86 machine including 386,486, Pentium, Pentium pro or AMD64
CPU, including optimal MMX, SSF, SSE2, and 3DNow instructions. Bochs interpreted
every instruction from power up to reboot, emulates the Intel X86 CPU, custom BIOS
and has device models for all the standard PC, peripheral, keyboard, mouse, VGA
card/monitor, disk, timer chips, network card, etc. Since Bochs simulates the whole PC
environment, the software running in the simulation thinks as if it is running on a real
machine (hence Virtual Machine) and in this way supports execution of unmodified
legacy software (e.g. Operating Systems) on its virtual machine without any difficulty.

The interactions between Bochs and the host operating system can be complicated and,
in some cases, specific to the host platform. Although Bochs is too slow to be
used as a virtual machine technology in practice, it has several important applications that
are hard to achieve using commercial emulators. It is useful for letting people run
applications written for a second operating system; for example, it lets people run
Windows software on a non-x86 workstation or on an x86 Unix box. Being open
source, it can be used extensively for debugging new operating systems: if the boot code
for a new operating system does not seem to work, Bochs can be used to step through the
memory contents, CPU registers, and other relevant information to fix the bug. Writing a
new device driver, and understanding how hardware devices work and interact with each
other, are made easier through Bochs. In industry, it is used to support legacy applications
on modern hardware and as a reference model when testing new x86-compatible hardware.

2.2.2 CRUSOE
Transmeta’s VLIW-based Crusoe processor comes with a dynamic x86 emulator,
called the “code morphing engine”, and can execute any x86-based application on top of it.
Although the initial intent was to create a simpler, smaller, and less power-hungry
chip, which Crusoe is, there were few compiler writers willing to target the new processor.
Thus, with some additional hardware support, extensive caching, and other optimizations,
Crusoe was released with an x86 emulator on top of it. It uses 16 MB of system memory
as a “translation cache” that stores recent results of x86-to-VLIW instruction
translations for future use.

The Crusoe is designed to handle the x86 ISA’s precise exception semantics without
constraining speculative scheduling. This is accomplished by shadowing all registers
holding x86 state. For example, if a division by zero occurs, it rolls back the effects
of all out-of-order and aggressively loaded instructions by copying the processor state
back from the shadow registers. Alias hardware also helps the Crusoe rearrange code so that
data can be loaded optimally. All these techniques greatly enhance Crusoe’s
performance.
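The following simplified Python sketch (invented data structures, not Transmeta’s actual design) illustrates the commit/rollback idea behind the shadow registers: speculative work updates a working copy of the registers, which is committed to the shadow copy only when a translated block completes without an exception.

class ShadowedRegisters:
    """Working registers are committed to shadow copies only when a block succeeds."""
    def __init__(self):
        self.working = {"eax": 0, "ebx": 0}
        self.shadow = dict(self.working)     # last known-good (committed) x86 state

    def execute_block(self, ops):
        try:
            for reg, fn in ops:              # speculative / reordered work on the block
                self.working[reg] = fn(self.working[reg])
            self.shadow = dict(self.working)             # commit at the end of the block
        except ArithmeticError:
            self.working = dict(self.shadow)             # roll back to the committed state
            raise                                        # then deliver the precise exception

regs = ShadowedRegisters()
regs.execute_block([("eax", lambda v: v + 10)])          # commits eax = 10
try:
    regs.execute_block([("eax", lambda v: v + 5),
                        ("ebx", lambda v: v // 0)])      # faults mid-block
except ArithmeticError:
    pass
print(regs.working)                                      # {'eax': 10, 'ebx': 0}

The faulting block’s partial effects are discarded, so the guest observes the exception at a precise architectural state, which is the behaviour the x86 ISA requires.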

2.2.3 QEMU
QEMU is a fast processor emulator that uses a portable dynamic translator. It supports
two operating modes: user-space-only emulation and full system emulation. In the former
mode, QEMU can launch Linux processes compiled for one CPU on another CPU, and can
be used for cross-compilation and cross-debugging. In the latter mode, it can emulate a full
system that includes a processor and several peripheral devices. It supports emulation of a
number of processor architectures, including x86, ARM, PowerPC, and SPARC, unlike
Bochs, which is closely tied to the x86 architecture. Like Crusoe, it uses dynamic translation
to native code for reasonable speed. In addition, its features include support for
self-modifying code and precise exceptions. Both a full software MMU and simulation
through mmap() system calls on the host are supported. During dynamic translation, it
converts each piece of encountered code into the host instruction set.

The basic idea is to split every x86 instruction into a few simpler instructions.
Each simple instruction is implemented by a piece of C code, and a compile-time
tool feeds the corresponding object file to a dynamic code generator which concatenates
the simple instructions to build a function. More such tricks enable QEMU to be
relatively easy to port and simple while achieving high performance. Like Crusoe, it
also uses a 16 MB translation cache and flushes it when it gets filled. It uses
basic blocks as its translation unit. Self-modifying code is a special challenge in x86
emulation because no instruction cache invalidation is signaled by the application when
code is modified. When translated code is generated for a basic block, the
corresponding host page is write-protected if it is not already read-only; then, if a write
access is made to the page, Linux raises a SEGV signal. QEMU, at this point, invalidates
all the translated code in the page and re-enables write access to the page, in order to
support self-modifying code. It uses basic block chaining to accelerate the most common sequences.
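As a rough illustration of this mechanism, here is a toy Python sketch (invented names and guest code, not QEMU’s implementation) of a basic-block translation cache: blocks are translated once, reused from the cache, and invalidated when the guest writes to the page containing them.

PAGE_SIZE = 4096

class TranslationCache:
    """Caches 'translated' guest basic blocks and invalidates them on guest writes."""
    def __init__(self):
        self.blocks = {}                            # guest address -> translated block

    def translate(self, addr, guest_block):
        print(f"translating block at {hex(addr)}")  # pretend this emits host code
        return [f"host_op({ins})" for ins in guest_block]

    def run_block(self, addr, guest_memory):
        if addr not in self.blocks:                 # translate only on first execution
            self.blocks[addr] = self.translate(addr, guest_memory[addr])
        return self.blocks[addr]                    # otherwise reuse the cached code

    def on_guest_write(self, write_addr):
        # A write to a code page invalidates every cached block on that page,
        # mirroring the write-protect / SEGV / invalidate sequence described above.
        page = write_addr // PAGE_SIZE
        for addr in [a for a in self.blocks if a // PAGE_SIZE == page]:
            del self.blocks[addr]

memory = {0x1000: ["mov r0,1", "add r0,r0"], 0x2000: ["out r0"]}
tc = TranslationCache()
tc.run_block(0x1000, memory)   # translated
tc.run_block(0x1000, memory)   # served from the cache, no retranslation
tc.on_guest_write(0x1004)      # the guest modified its own code page
tc.run_block(0x1000, memory)   # translated again after invalidation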

2.2.4 BIRD
BIRD is an interpretation engine for x86 binaries that currently supports only x86 as the
host ISA and aims to be extended to other architectures as well. It exploits the similarity
between the guest and host architectures and tries to execute as many instructions as
possible directly on the native hardware; all other instructions are supported through
software emulation. Apart from interpretation, it provides tools for binary analysis as well
as binary rewriting that are useful for eliminating security vulnerabilities and for code
optimization. It combines static and dynamic analysis and translation techniques for
efficient emulation of x86-based programs. Dynamo is another project with a similar aim.
It uses a cache to store so-called “hot traces” (sequences of frequently executed
instructions), e.g. a block inside a for loop, optimizes them, and executes them natively
on the hardware to improve performance.

2.3 HOW VIRTUALIZATION COMPLEMENTS NEW GENERATION HARDWARE
Extensive ‘scale-out’ and multi-tier application architectures are becoming increasingly
common, and the adoption of smaller form-factor blade servers is growing dramatically.
Since the transition to blade architectures is generally driven by a desire for physical
consolidation of IT resources, virtualization is an ideal complement for blade servers,
delivering benefits such as resource optimization, operational efficiency and rapid
provisioning. The latest generation of x86-based systems features processors with 64-bit
extensions supporting very large memory capacities. This enhances their ability to host
large, memory-intensive applications, as well as allowing many more virtual machines to
be hosted by a physical server deployed within a virtual infrastructure. The continual
decrease in memory costs will further accelerate this trend. Likewise, the forthcoming
dual-core processor technology significantly benefits IT organizations by dramatically
lowering the cost of increased performance compared to traditional single-core systems:
systems using dual-core processors will be less expensive, since only half the number
of sockets will be required for the same number of CPUs. By significantly lowering the
cost of multi-processor systems, dual-core technology will accelerate data center
consolidation and the adoption of virtual infrastructure to the full extent. In particular, the new
virtualization hardware-assist enhancements (Intel’s “VT” and AMD’s “Pacifica”) will
enable robust virtualization of CPU functionality. Such hardware virtualization
support does not replace virtual infrastructure, but allows it to run more efficiently.

Although virtualization is rapidly becoming a mainstream technology, the concept continues
to attract a huge amount of interest, and enhancements continue to be investigated. One of
these is paravirtualization, whereby operating system compatibility is traded off against
performance for certain CPU-bound applications running on systems without
virtualization hardware assist. The paravirtualization model offers potential performance
benefits when a guest operating system or application is ‘aware’ that it is running within a
virtualized environment and has been modified to exploit this. One potential downside of
the approach is that such modified guests cannot be migrated back to run on physical
hardware.

In addition to requiring modified guest operating systems, paravirtualization relies on a
hypervisor as the underlying technology. In the case of Linux distributions, this
approach requires extensive changes to the operating system kernel so that it can coexist
with the hypervisor. Accordingly, mainstream Linux distributions (such as Red Hat or
SUSE) cannot be run in a paravirtualized mode without some level of modification.
Likewise, Microsoft has suggested that a future version of the Windows operating system
will be developed that can coexist with a new hypervisor offering from Microsoft. Yet
paravirtualization is not an entirely new concept. For example, VMware has employed it
by making enhanced device drivers (packaged as VMware Tools) available as an option
to increase the efficiency of the guest operating system.

3.0 MAIN DISCUSSION

Every virtualization technology abstracts a computing resource in some way to make it
more useful. Whether the thing being abstracted is a computer, an application’s user
interface, or the environment that application runs in, virtualization boils down to this
core idea. And while all of these technologies are important, it’s fair to say that hardware
virtualization gets the most attention today. Accordingly, it’s the place to begin this
technology tour.

3.1 Hardware Virtualization

Many trends in computing depend on an underlying megatrend: the exponential growth in
processing power described by Moore’s Law. One way to think of this growth is to
realize that in the next two years, processor capability will increase by as much as it has
since the dawn of computing. Given this rate of increase, keeping machines busy gets
harder. Combine this with the difficulty of running different workloads provided by
different applications on a single operating system, and the result is lots of under-utilized
servers. Each one of these server machines costs money to buy, house, run, and manage,
and so a technology for increasing server utilization would be very attractive.

Hardware virtualization is that technology, and it is unquestionably very attractive. While
hardware virtualization is a 40-year-old idea, it’s just now becoming a major part of
mainstream computing environments. In the not-too-distant future, expect to see the
majority of applications deployed on virtualized servers rather than dedicated physical
machines. The benefits are too great to ignore.

To let Windows customers reap these benefits, Microsoft today provides two
fundamental hardware virtualization technologies: Hyper-V for servers and Virtual PC
2007 for desktops. These technologies also underlie other Microsoft offerings, such as
Virtual Desktop Infrastructure (VDI) and the forthcoming Microsoft Enterprise Desktop
Virtualization (MED-V). The following sections describe each of these.
a) Hyper-V
The fundamental problem in hardware virtualization is to create virtual machines in
software. The most efficient way to do this is to rely on a thin layer of software known as
a hypervisor running directly on the hardware. Hyper-V, part of Windows Server 2008, is
Microsoft’s hypervisor. Each VM Hyper-V provides is completely isolated from its

fellows, running its own guest operating system. This lets the workload on each one
execute as if it were running on its own physical server.
Figure 5: Illustrating Hyper-V in Windows Server 2008 (the Hyper-V hypervisor runs directly on the hardware; a parent partition running Windows Server 2008 manages child partitions running guests such as Windows 2000 Server and SUSE Linux)

As the figure shows, VMs are referred to as partitions in the Hyper-V world. One of
these, the parent partition, must run Windows Server 2008. Child partitions can run any
other supported operating system, including Windows Server 2008, Windows Server
2003, Windows Server 2000, Windows NT 4.0, and Linux distributions such as SUSE
Linux. To create and manage new partitions, an administrator can use an MMC snap-in
running in the parent partition.

This approach is fundamentally different from Microsoft’s earlier server technology for
hardware virtualization. Virtual Server 2005 R2, the virtualization technology used with
Windows Server 2003, ran largely on top of the operating system rather than as a
hypervisor. One important difference between these two approaches is that the low-level
support provided by the Windows hypervisor lets virtualization be done in a more
efficient way, providing better performance.

Other aspects of Hyper-V are also designed for high performance. Hyper-V allows
assigning multiple CPUs to a single VM, for example, and it’s a native 64-bit technology.
(In fact, Hyper-V is part of all three 64-bit editions of Windows Server 2008—Standard,
Enterprise, and Data Center—but it’s not available for 32-bit editions.) The large physical
memory space this allows is useful when many virtual machines must run on a single
physical server. Hyper-V also allows the VMs it supports to have up to 64 gigabytes of

memory per virtual machine. And while Hyper-V itself is a 64-bit technology, it supports
both 32-bit and 64- bit VMs. VMs of both types can run simultaneously on a single
Windows Server 2008 machine.

Whatever operating system it’s running, every VM requires storage. To allow this,
Microsoft has defined a virtual hard disk (VHD) format. A VHD is really just a file, but
to a virtual machine, it appears to be an attached disk drive. Guest operating systems and
their applications rely on one or more VHDs for storage. To encourage industry adoption,
Microsoft has included the VHD specification under its Open Specification Promise
(OSP), making this format freely available for others to implement. And because Hyper-
V uses the same VHD format as Virtual Server 2005 R2, migrating workloads from this
earlier technology is relatively straightforward.
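To illustrate the point that a VHD is “really just a file”, here is a toy Python sketch of a file-backed virtual disk (this is not the actual VHD format; the file name and sizes are invented): the guest sees numbered sectors, while the host sees one ordinary file that can simply be copied to another machine to restore or migrate the workload.

import os

SECTOR = 512

class FileBackedDisk:
    """A toy 'virtual disk': the guest sees sectors, the host sees one ordinary file."""
    def __init__(self, path, size_mb=10):
        self.path = path
        if not os.path.exists(path):
            with open(path, "wb") as f:
                f.truncate(size_mb * 1024 * 1024)   # allocate the backing file

    def write_sector(self, lba, data):
        assert len(data) == SECTOR
        with open(self.path, "r+b") as f:
            f.seek(lba * SECTOR)
            f.write(data)

    def read_sector(self, lba):
        with open(self.path, "rb") as f:
            f.seek(lba * SECTOR)
            return f.read(SECTOR)

disk = FileBackedDisk("guest_disk.img")
disk.write_sector(0, b"BOOT".ljust(SECTOR, b"\x00"))
print(disk.read_sector(0)[:4])   # b'BOOT'
# Restoring or migrating this "disk" is just copying guest_disk.img to another host.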

Windows Server 2008 has an installation option called Server Core, in which only a
limited subset of the system’s functions is installed. This reduces both the management
effort and the possible security threats for this system, and it’s the recommended choice
for servers that deploy Hyper-V. Systems that use this option have no graphical user
interface support, however, and so they can’t run the Windows Server virtualization
management snap-in locally. Instead, VM management can be done remotely using
Virtual Machine Manager. It’s also possible to deploy Windows Server 2008 in a
traditional non-virtualized configuration. If this is done, Hyper-V isn’t installed, and the
operating system runs directly on the hardware.

Hardware virtualization is a mainstream technology today. Microsoft’s decision to make
it a fundamental part of Windows only underscores its importance. After perhaps the
longest adolescence in computing history, this useful idea has at last reached maturity.

b) Virtual Desktop Infrastructure (VDI)

The VMs that Hyper-V provides can be used in many different ways. Using an approach
called Virtual Desktop Infrastructure, for example, Hyper-V can be used to run client
desktops on a server. Figure 6 illustrates the idea.

Figure 6: Illustrating Virtual Desktop Infrastructure (each Hyper-V child partition runs a Windows Vista desktop, and users access those desktops remotely over RDP)
As the figure shows, VDI runs an instance of Windows Vista in each of Hyper-V’s child
partitions (i.e., its VMs). Vista has built-in support for the Remote Desktop Protocol
(RDP), which allows its user interface to be accessed remotely. The client machine can
be anything that supports RDP, such as a thin client, a Macintosh, or a Windows system.
If this sounds similar to presentation virtualization, it is: RDP was created for Windows
Terminal Services. Yet with VDI, there’s no need to deploy an explicit presentation
virtualization technology—Hyper-V and Vista can do the job.

Like presentation virtualization, VDI gives each user her own desktop without the
expense and security risks of installing and managing those desktops on client machines.
Another potential benefit is that servers used for VDI during the day might be re-
deployed for some other purpose at night. When users go home at the end of their work
day, for example, an administrator could use Virtual Machine Manager to store each
user’s VM, and then load other VMs running some other workload, such as overnight
batch processing. When the next workday starts, each user’s desktop can then be restored.
This hosted desktop approach can allow using hardware more efficiently, and it can also
help simplify management of a distributed environment.

c) Virtual PC 2007
The most commercially important aspect of hardware virtualization today is the ability to
consolidate workloads from multiple physical servers onto one machine. Yet it can also

be useful to run guest operating systems on a desktop machine. Virtual PC 2007, shown
in Figure 7, is designed for this situation.

Figure 7: Illustrating Virtual PC 2007 (Virtual PC 2007 runs on top of Windows Vista or Windows XP and hosts virtual machines running guests such as Windows Vista, Windows XP, and OS/2 Warp)


Virtual PC runs on Windows Vista and Windows XP, and it can run a variety of x86-
based guest operating systems. The supported guests include Windows Vista, Windows
XP, Windows 2000, Windows 98, OS/2 Warp, and more. Virtual PC also uses the same
VHD format for storage as Hyper-V and Virtual Server 2005 R2.

As Figure 7 shows, however, Virtual PC takes a different approach from Hyper-V: It
doesn’t use a hypervisor. Instead, the virtualization software runs largely on top of the
client machine’s operating system, much like Virtual Server 2005 R2. While this
approach is typically less efficient than hypervisor-based virtualization, it’s fast enough
for many, probably even most, desktop applications. Native applications can also run
side-by-side with those running inside VMs, so the performance penalty is paid only
when necessary.

d) Looking Ahead: Microsoft Enterprise Desktop Virtualization (MED-V)

Just as the server virtualization provided by Hyper-V can be used in many different ways,
Virtual PC’s desktop virtualization can also be used to do various things. One example of
this is Microsoft Enterprise Desktop Virtualization, scheduled to be released in 2009.

With MED-V, clients with Virtual PC installed can have pre-configured VM images
delivered to them from a MED-V server. Figure 8 shows how this looks.

Figure 8: Illustrating Microsoft Enterprise Desktop Virtualization (MED-V) (a MED-V server delivers pre-configured virtual machine images, such as Windows XP and Windows Vista VMs, to client machines running Virtual PC 2007 on Windows Vista)


A client machine might run some applications natively and some in VMs, as shown on
the left in the figure, or it might run all of its applications in one or more VMs, as shown
on the right. In either case, a central administrator can create and deliver fully functional
VM images to clients. Those images can contain a single application or multiple
applications, allowing all or part of a user’s desktop to be delivered on demand.
For example, suppose an organization has installed Windows Vista on its clients, but still
needs to use an application that requires Windows XP. An administrator can create a VM
running Windows XP and only this application, then rely on the MED-V Server to
deliver that VM to client machines that need it. An application packaged in this way can
look just like any other application—the user launches it from the Start menu and sees
just its interface—while it actually runs safely within its own virtual machine.

While virtualization has been a part of the IT landscape for decades, it is only relatively
recently (in 1998) that the benefits of virtualization were delivered to industry-standard
x86-based platforms, which now form the majority of desktop, laptop and server shipments. A key
benefit of virtualization is the ability to run multiple operating systems on a single
physical system and share the underlying hardware resources – known as partitioning.
Today, virtualization can apply to a range of system layers, including hardware-level
virtualization, operating system level virtualization, and high-level language virtual
machines. Hardware-level virtualization was pioneered on IBM mainframes in the
1970s, and then more recently Unix/RISC system vendors began with hardware-based
partitioning capabilities before moving on to software-based partitioning.
Virtualization technology has been around for many years. Virtualization is rapidly
transforming the IT work environment: it lets organizations run multiple virtual
machines on a single physical machine, sharing the resources of that single computer
across multiple environments. Virtualization allowed us to streamline LAN administration,
as these activities can be performed by fewer, fully dedicated employees from a
centralized location. Our security is strengthened by reducing the number of physical
locations containing servers and electronic Personally Identifiable Information. Using
virtualization can potentially provide high availability for electronic data through on-line
data redundancy that was not available in the previous environment. The
consolidation and standardization of LAN services improves the Agency’s operational
efficiency by providing more integration, more streamlining, more standardization, and
more flexibility.
Since the tragedy of September 11, 2001, security has become a major concern within
the federal government in the USA and has led to a movement to centralize rather than
decentralize IT systems and resources so as to protect them better by having fewer facilities
to physically secure and monitor. More recently, concerns about energy utilization have
led to directives to consolidate and centralize servers to save energy. President Obama’s
administration has called for federal agencies to cut energy consumption in their data
centers as part of the broader effort to green federal operations. An executive order
signed by President Obama on October 5, 2009 requires Agencies to adopt best
practices to manage their servers in an energy-efficient manner. Prior to this initiative,
our Field Office servers operated at about 20-30 percent utilization, so energy savings can
be realized by consolidating them. The Department of Agriculture’s Chief
Information Officer has directed Agencies to migrate their servers to Data Centers. Due
to the need for NASS (the National Agricultural Statistics Service) to maintain a strong
degree of independence as a federal statistical agency, the Department’s CIO has agreed
to allow NASS to continue to manage our LAN servers, but to centralize those servers for
security and energy reasons. There are also mandates and initiatives to support telework
and remote access to the Agency network. Virtualization can play a role in meeting all
of these overarching directives. With a couple of exceptions, they chose to virtualize the
entire desktop as opposed to the more common technique of virtualizing individual
applications to run from a server. They are using Citrix XenServer to virtualize the
servers. They are consolidating the 94 Field Office servers into 44 servers, with 22
virtualized servers located in Washington and 22 in Kansas City.
Many organizations today rely on some degree of virtualization. Whether it is a
few virtual machines running on a single physical computer or a whole server farm
across multiple root servers, virtualization optimizes investment in hardware and
network infrastructure by:

• Increasing the utilization of underused hardware.
• Improving server availability.
• Reducing IT costs.
Virtualization provides unmatched flexibility, performance, and utilization by allowing
server workloads to be moved from one virtual workspace to the next, maximizing server
resources on the fly based on business needs.

3.2 Scope of virtualization deployment

In 2011, SearchDataCenter.com released the Data Center Decisions 2011 survey to
gauge reader trends and get an understanding of the factors influencing data center
evolution in the enterprise. They received more than 1,000 responses from a range of
professionals spanning numerous IT roles. In this article, the second in a special report
series, they examine survey results surrounding virtualization, cloud adoption and
technology choices. Based on this article (which draws on the Data Center Decisions 2011
survey data released by SearchDataCenter.com), more organizations are deploying
virtualization across a larger number of physical servers. In 2011, 43% of respondents
deployed virtualization on up to 25% of servers, 28% deployed it on 26% to 50% of
servers, 18% deployed it on 51% to 75% of servers, and 11% virtualized
76% to 100% of their servers. This is a shift from 2010, when far more respondents
deployed virtualization on fewer machines. As time goes on, virtualization should
appear across a larger percentage of data center ecosystems.
The number of virtual workloads deployed on each physical server has also increased slowly since the previous year. In 2011, 50% of respondents to the same survey reported running fewer than 10 workloads per physical server, 35% ran 10 to 20 workloads per server, 10% ran 21 to 30 workloads per server, and 5% ran more than 30 workloads per server. It is important to note that a larger percentage of users are running 10 to 20 VMs per server. This trend should continue as users become more comfortable maximizing physical server utilization.
3.3 Global Virtualization Services

Central European governments are embracing virtualization and document management as they scramble to manage records and documents, according to IDC. Two-thirds of Central European government entities use virtualized servers as their default for new application deployment, and more than half of their physical servers are virtualized, IDC reported. About half of the government IT decision makers in Central Europe (which includes the Czech Republic, Hungary, Poland, and Slovakia) are using or planning to use cloud technologies, the same report found.
NTT Europe's Global Virtualization Services offer enterprises a secure, high-quality and easy-to-manage environment in which enterprise applications can be quickly rolled out. A comprehensive array of virtualized hosting services available in Europe, Asia and the US allows organizations to improve efficiency, control costs and manage growth.
As a subsidiary of NTT Communications, NTT Europe offers access to a privately owned and managed Tier-1 IP network and premium data centers across the globe. This proposition is crucial to customers looking to serve new markets or to meet legal or compliance requirements with regard to the physical location of the data being held. The Global Virtualization Services are guaranteed under a single SLA, which centralizes hosting with one-stop management, support and billing.
NTT Europe's web-based customer portal provides a self-service area for managing the virtualization solution across all three continents. Experienced technical architects and VMware specialists are available to assist customers with design, installation, operation and maintenance.
A separate study examined the state of the storage virtualization market at the start of 2008. IT professionals at 324 companies in the United States and Europe participated in the evaluation, which found that the storage virtualization market is poised for rapid growth over the next several years. Interest in cutting storage costs and improving storage resiliency are the most important drivers. According to this study, 21% of U.S. companies use storage virtualization. However, this is poised to more than double, as 26% are either currently implementing or planning to implement storage virtualization (on average within 6 to 12 months).
3.4 Virtualization Fundamentals

Virtualization can be considered IT asset optimization. There are four parts to asset optimization:

• Rationalization: the removal of "slack" from the system, matching expenditures with actual needs.
• Optimization: the complement to rationalization, altering actual requirements to gain efficiency and economies of scale.
• Consolidation: the process of combining data or applications in an attempt to increase resource utilization.
• Virtualization: the process of "combining several operating systems images into a single virtualized platform, providing economies of scale in resource utilization while maintaining a partition between operational environments" (Hillier, 2006).
Storage virtualization allows all hard drives on the system to act like one large pool of storage. This increases the efficiency of storage by allowing files to be stored wherever there is space, rather than letting some drives go underutilized. With virtualization, drives can be added or replaced on the fly, since the virtualization software will reconfigure the network and the affected servers. Mirroring and backup are also faster, since the only data that is copied is the data that has changed. Such capability significantly reduces the scheduled downtime caused by handling these tasks (Gruman, 2007).
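As a rough illustration of the pooling idea, the short Python sketch below models a set of drives as one logical pool and places each file on whichever drive currently has the most free space. It is a conceptual toy, not a model of how any particular storage virtualization product works; the drive sizes and file names are hypothetical.

```python
# Conceptual toy: present several drives as one pool and place each file on
# the drive with the most free space. Not a model of any specific product.

class StoragePool:
    def __init__(self, drive_sizes_gb):
        # Free space remaining on each physical drive, indexed by drive number.
        self.free = list(drive_sizes_gb)
        self.placement = {}  # file name -> drive index

    def total_free(self):
        return sum(self.free)

    def store(self, name, size_gb):
        """Place a file on the drive with the most free space, if it fits."""
        drive = max(range(len(self.free)), key=lambda i: self.free[i])
        if self.free[drive] < size_gb:
            raise IOError(f"pool has no single drive with {size_gb} GB free")
        self.free[drive] -= size_gb
        self.placement[name] = drive
        return drive


if __name__ == "__main__":
    pool = StoragePool([500, 500, 1000])      # three hypothetical drives
    for fname, size in [("backup.img", 400), ("db.vmdk", 600), ("logs", 300)]:
        print(fname, "-> drive", pool.store(fname, size))
    print("free per drive:", pool.free, "| total free:", pool.total_free(), "GB")
```

The caller never chooses a drive; the pool does, which is the sense in which the drives behave as one large logical store.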
Virtualization software enables companies to use one physical server to deliver the performance of multiple servers. Without virtualization, departments needing more resources would have to get the cost of additional equipment approved, then wait for it to be ordered, received and installed. Virtualization allows IT managers to maximize resources by combining them on a single server (Mullins, 2007). With virtualization, new applications can be made available within a few minutes and without the expense of additional equipment (Connor & Mullins, 2007). VMware states that one server can consolidate 10-15 production systems. Although some applications are labeled as "non-mission-critical", when a significant number of users are affected they become mission critical (Stratus Technology, 2007a, 2007b).
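To show what a consolidation ratio like the 10-15 systems per server cited above implies, the following back-of-the-envelope Python calculation estimates how many physical hosts a fleet of existing servers would need after virtualization. The server count and the ratios are placeholders to be replaced with real figures, not data from the studies quoted in this report.

```python
import math

def hosts_needed(physical_servers, consolidation_ratio):
    """Estimate physical hosts required after consolidating at a given ratio."""
    return math.ceil(physical_servers / consolidation_ratio)

# Hypothetical example: 40 existing servers, consolidated at 10:1 and 15:1.
before = 40
for ratio in (10, 15):
    after = hosts_needed(before, ratio)
    print(f"{before} servers at {ratio}:1 -> {after} hosts "
          f"({before - after} machines removed)")
```

Even at the conservative end of the quoted range, the estimate shows an order-of-magnitude reduction in physical machines, which is where the hardware, power and cooling savings discussed later come from.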
Research shows that with virtualization, many existing strategies governing the implementation and management of IS projects may no longer work. For example, Microsoft is known for its operating system, but with virtualization the operating system is not as important an issue as it used to be. Third-party application vendors can use a virtual environment and run their own microkernels, which eliminates the need for a conventional operating system. Since the hypervisor is becoming the intermediary of the data center, vendors are building their layers to tap straight into the hypervisor. This does not always eliminate the dependency on an operating system, but it does limit its influence (Conry-Murray, 2007).
As Chart 1 below shows, 90% of the 250 companies surveyed by InformationWeek are either already using server virtualization or are planning to in the future. The remaining 10% not planning to implement virtualization cite lack of funds as the primary reason; additional reasons include a lack of skilled IT personnel and the training that would be required for staff to handle the complexity of a virtual environment (Smith, 2007). To illustrate how to assess the feasibility of implementing a virtualization project, and how to choose the right technology provider, experiences reported by management at several companies are included.
CONCLUSION

The pull of virtualization is strong: the economics are too attractive to resist, and for most organizations there is no reason to fight against this pull. Well-managed virtualization technologies can make their world better.
Microsoft takes a broad view of this area, providing hardware virtualization, presentation virtualization, application virtualization, and more. The company also takes a broad view of management, with virtualized technologies given equal weight to their physical counterparts. As the popularity of virtualization continues to grow, these technologies are becoming the bedrock of modern computing.
The examination also makes it apparent that the performance of systems running Windows 10 as the native operating system is considerably better than that of systems running Windows 10 as a virtualized guest. Most of the performance loss that accompanies virtualized systems can be partly attributed to the additional overhead introduced by virtualization. The particular combination of operating system and virtual machine also plays a significant role.
Based on the data gathered throughout this paper, VMware is a solution to which Triton should give serious consideration. While it appears Microsoft will be moving forward with its own virtualization software, that software will have to be debugged for some period of time. VMware has already gone through most debugging processes and has proven effective in many businesses. VMware has earned high accolades from large technology companies such as Dell and HP, and companies like Stratus are promoting server systems, built around the importance of uptime, in combination with VMware.
Research has shown that numerous companies have successfully put VMware virtualization into place. Although the research includes some negative comments, the positives outweigh the negatives. The prospect of a hypervisor that will not need an operating system per se opens up the range of applications that can be run.
When considering the VMware virtualization software, it is also recommended to use two dual-socket, quad-core processors. These provide very good price-performance when used in rack-mounted or blade servers. All the recommendations presented in the managerial implications section may serve as general guidelines for companies considering virtualization initiatives, increasing the chance of successfully implementing the project. Additional strategies dealing with storage and desktop virtualization will be the focus of future studies.
To see how to handle the implementation of a virtualization project, let us start by examining the case of Triton Systems Inc. The company currently has forty servers supporting the needs of all users. These servers range in age from one to five years. All servers carry a five-year warranty, and due to the critical nature of the data they contain, the servers are replaced just prior to the warranty expiring. As new servers need to be purchased, the company is considering hardware that will allow it to take full advantage of server virtualization.
VMware is the top virtualization solution under consideration. Industry researchers reported that 80% of companies improving utilization through virtualization chose VMware software when using x86 servers. VMware offers a "site-recovery manager" which allows the IT manager to reproduce machines virtually at other locations; the control layer that allows data to be placed in one or more remote sites is invisible to the users (Gruman, 2007). Another possible solution is "Windows Server Virtualization" from Microsoft, slated for release in the latter half of 2008 as an add-on to Windows Server 2008 (Mullins, 2007).
VMware has proven itself such a reliable solution that IBM, Hewlett-Packard and Dell all have plans to embed virtualization software in their x86 servers, which will make setup easier for companies wanting to use virtual servers (Thibodeau, 2007). InformationWeek states that four out of five companies using virtualization software are using VMware. It is believed that business technology professionals would prefer to buy from virtualization software vendors instead of system management vendors. Some statistics indicate that one in five companies uses virtualization software from more than one vendor, with a third reporting that working with several different vendors introduces problems (Smith, 2007). On the strength of these reported results, the VMware solution was adopted by management at Triton Systems.
There are, however, some concerns with the VMware virtualization project. These include a different way of managing the disks, in which the IT staff will not be able to copy volumes with partial files but must copy the actual files for backup. Another concern arises during setup: care must be taken when dealing with high- and low-performance drives. If lower-performance drives are accidentally placed in high-performance virtual servers, this could hinder overall performance, including that of critical applications. Using virtualization tools is not difficult; it is simply different from what most IT professionals are accustomed to. Choosing the right storage form is also critical. There are two options: network-based virtualization, which is delivered by server-based software, and array-based virtualization, which is typically part of storage management software. The downside of the array-based approach is that array storage must be purchased, which may create expensive vendor lock-in. Network-based virtualization seems to be the most flexible and can be managed from anywhere, provided the drives are available via the storage area network (SAN) (Gruman, 2007).
George Scangas, manager of IT architecture at Welch's, stated that had Welch's not adopted virtualization, it would have had to build a new data center at a cost in the high six figures. By using virtualization, Scangas stated, there is an immediate cost saving from using less cable, as well as savings in power (although other data disputes this) and rack space. It is estimated that Welch's saved at least $300,000 in hardware costs alone, not including the reduced power bill due to the need for less cooling. Welch's currently runs 100 virtual machines and expects to add an additional 10 or 20 in the next quarter. Scangas stated that his confidence in VMware has grown as the technology matures, such that Welch's is more willing to place "business-critical programs on VMware". A study by The Strategic Council in June 2007 reported that 45% of companies considered their virtualization deployments unsuccessful; more than one quarter failed to realize a return on investment, and less than ten hit their targeted cost savings. With all this being said, virtualization is not going anywhere, and there is hope that Microsoft's eventual entry will provide an additional boost and spur wider adoption of the technology (Watson, 2007).
In a September 2007 whitepaper, CiRBA stated that virtualization is not just a sizing exercise, but an effort to ensure that all constraints which govern and impact the company's IT environment are considered during the planning process and in how the virtual environment is managed. The whitepaper credited VMware as the industry-leading virtualization solution, but suggested that a company may consider combining it with "accurate intelligence and focused analytics" to map the current servers into the new virtual configuration (CiRBA, 2008).
REFERENCES
Business Week Online (2007). The Virtues of Virtualization, Business Week Online, 12(3), 6.

Bhatt, M., Ahmed, I., & Lin, Z. (2018, February). Using virtual machine introspection for operating systems security education. In Proceedings of the 49th ACM Technical Symposium on Computer Science Education (pp. 396-401). ACM.

CiRBA (2008). Virtualization Analysis for VMware, white paper by CiRBA, May 2008, retrieved from https://whitepapers.theregister.com/paper/view/10082/cluster-management-and-virtualization

Dubie, D. (2007). Managing virtualized servers is no easy task, Network World, 11/5/2007, 24(43), 1 and 43-46.

Godbout, Y. (2007). The virtual reality, CA Magazine, Jun/Jul 2007, 140(5), 45-47.

Goodchild, J. (2006). Virtualization software or blade servers: Which is right for server consolidation? Server Virtualization News, June 09, searchservervirtualization.com.

Gruman, G. (2007). Storage Virtualization Takes Off, CIO, 9/15/2007, 20(23), 27-31.

Guerrero, P. (2008). Composite Information Server, DM Review, January, 18(1), 38.

Hassell, J. (2007). Server Virtualization: Getting Started, Computerworld, 5/28/2007, 41(22), 31.

Hayes, F. (2008). Face-off: Virtualization Takes Center Stage: Virtual Stride, Computerworld, 1/1/2008, 42(1), 22-24.

Hillier, A. (2006). A Quantitative and Analytical Approach to Server Consolidation, January, free whitepaper from CiRBA.

Humphreys, J. (2007). Enabling Technology for Blade I/O Virtualization, CIO, March 20, retrieved from http://www.cio.co.uk/whitepapers/index.cfm?whitepaperid=5050

Kovar, J. (2007). How To Build A Virtualization Practice, VARBusiness, November, 23(18), 49-52.

Kovar, J. (2008). Server Virtualization, VARBusiness, February, 24(2), 38.

Lechner, R. (2007). Using virtualization to boost efficiency, Network World, 9/24/2007, 24(37), 24.

Marshall, D. & Knezevic, D. (2007). The new world of virtualization, retrieved from http://www.bitpipe.com/detail/RES/1181589520_238.html?asrc=PAR_AFL_HIGHBEAM

Mullins, R. (2007). VMware's flying high, but…, Network World, 9/10/2007, 24(35), 1 and 60.

Orfali, R., Harkey, D., & Edwards, J. (1994). Essential Client/Server Survival Guide. New York: Van Nostrand Reinhold.

Scheier, R. (2007). Virtualization 101, Computerworld, 10/15/2007, 41(42), 54-56.

Smith, L. (2007). The Reality of Going Virtual, InformationWeek, 2/12/2007, 1125, 49-52.