Server Virtualization

In virtualized data centers, I/O performance problems can be caused by running numerous virtual machines (VMs) on one server. In early server virtualization implementations, the number of virtual machines per server was typically limited to six or fewer. It was later found that a server could safely run seven or more applications, often using 80 percent of total server capacity, an improvement over the average 5 to 15 percent utilization of non-virtualized servers.
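To make the consolidation arithmetic concrete, the short Python sketch below estimates how many lightly utilized physical servers could be folded onto a single virtualized host at a target utilization. The percentages are the illustrative figures quoted above, not measurements from any particular environment.

def consolidation_ratio(avg_util_per_server: float, target_util: float) -> int:
    """Number of lightly loaded physical servers that can be folded onto one
    virtualized host without exceeding the target utilization."""
    # The tiny epsilon guards against floating-point error in the division.
    return int(target_util / avg_util_per_server + 1e-9)

if __name__ == "__main__":
    # Non-virtualized servers average roughly 5 to 15 percent utilization;
    # a virtualized host is often driven to about 80 percent.
    for avg in (0.05, 0.10, 0.15):
        print(f"{avg:.0%} average load -> "
              f"{consolidation_ratio(avg, 0.80)} servers per virtualized host")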

Server Virtualization
Server virtualization is a virtualization technique that involves partitioning a physical server into a number of smaller, virtual servers with the help of virtualization software. In server virtualization, each virtual server runs its own operating system instance, so a single physical server can host multiple operating system instances at the same time.
Physical and Logical Partitioning

Physical Partitioning
Physical partitioning refers to the separation of execution environments by literally using
physically separate hardware devices or by using physical hardware-based partitioning. Physical
hardware separation (see Figure 1) is the easiest and most common form of partitioning. It is the
practice of using multiple physical servers (or computers), each having a single instance of an
operating system, to serve different needs or purposes. A common example of this practice is an
organization that has a separate server for each of the following server roles or applications: file
sharing, print spooling, domain authentication and authorization, database server, email server,
web server, FTP server, and so on.

Fig No.10. Physical Partitioning

Physical Hardware Separation

Physical hardware separation is commonly driven by applications that have mutually exclusive hardware or operating system requirements, applications with high resource utilization, or concerns about server stability. Some applications cannot share the same environment as other applications because they were designed to have control of the entire operating system and the server's resources. Other applications have high resource demands, such as heavy processor or memory use, intensive disk I/O, large storage requirements, or high network adapter bandwidth, that often require a dedicated server. Consider installing Microsoft SQL Server and Microsoft Exchange Server
together on a single server. Although it is technically possible, it is likely that each server
application will perform poorly as they continuously compete for control of the same system
resources.
Applications have also been separated onto dedicated physical servers because of the idea that it
is generally more stable to have fewer applications running within the same instance of an
operating system, usually because of poor resource management by the operating system (whether
true or perceived) or because of poor resource handling or wasteful resource utilization by an
application. Another reason that applications are installed on separate hardware is because they
were designed and written for different operating systems or different hardware architectures. For
example, Microsoft BizTalk Server must be installed onto a Windows operating system-based
server with an Intel processor whereas applications written for IBM's OS/400 operating system
must be installed on IBM AS/400 server hardware, while applications written for the Microsoft
Windows operating system must be installed on IA-32 compatible computer hardware.
Hardware partitioning, shown in Figure 2, is a highly specialized hardware technology that allows
the computing resources of a single, physical computer to be divided into multiple partitions, often
called hard partitions, each of which can host its own, isolated instance of an operating system.
Hardware partitioning has existed for quite some time, originating in high-end mainframe systems
from IBM.

Fig No.11. Physical Hardware Separation

Hardware Partitioning
Today, there are several hardware partitioning technologies available, although each
implementation is proprietary and requires very specific server hardware and software to be used.
In some implementations, only one or two very specific operating systems are supported. In
general, all of the required components of a system featuring hardware partitioning are only
available from a single vendor, due to their proprietary nature.
One of the key advantages of hardware partitioning is its very efficient resource sharing and
management capabilities. These systems are much more efficient than equivalent software
partitioning systems because the resource management between hard partitions is handled using
separate hardware components (chips, circuits, memory, storage, etc.). The specialized software
(sometimes referred to as microcode) that performs the actual resource management resides in the
specialized resource management hardware components as well. As a result, the available
performance in each hard partition is maximized and remains unaffected by the resource
management system's overhead.
This is very different from software partitioning technologies where the partitioning occurs in
software that is executed using the same hardware that is being managed and shared. Another
advantage, available in some implementations of hardware partitioning, is electrical isolation of
each hard partition. Electrical isolation in hardware partitioning systems allows a hardware fault
to occur in one hard partition while not affecting any other hard partition in the same system.
Systems offering hardware partitioning technologies are usually mid-range to high-end computing
systems that are generally very scalable (usually scaling up) and robust.
Hardware partitioning systems have several disadvantages: expensive, proprietary hardware and
software, additional costs incurred by the support and maintenance of the proprietary hardware,
limited support for various operating systems, limited hardware portability for an existing installed
base of hard partitions, and vendor lock-in. Proprietary hardware and software systems almost
always have additional costs for installation, training, support, and maintenance due to the lack of
expertise of most IT organizations with these types of systems.
Often vendors will only allow their services organization to perform the installation and support
of these systems. Hardware partitioning systems generally only allow one type of operating system
to be installed; of course, each hard partition supports a separate instance of that operating system.
There are some systems that are more flexible and support more than one operating system, but it
is almost always limited to operating systems provided by the vendor. Aside from limited operating
system support, hardware partitioning systems have very limited portability of existing partitions.
Generally, these partitions may only be moved to systems composed of the same vendor's
hardware because of the lack of complete hardware abstraction.

Investment in proprietary hardware and software systems almost always leads an organization into
what is known as vendor lock-in. Vendor lock-in occurs when an organization has made an
investment in a single vendor's proprietary technologies and it is thus cost-prohibitive for the
organization to move to a different technology or vendor. Vendor lock-in affects organizations for
long periods of time, usually five or more years at a time. Of course, the vendor reaps the benefit
of vendor lock-in because of the expense and difficulty an organization faces when attempting to
switch to another vendor. The organization suffers due to cost and inflexibility in changing
hardware and software, which makes it difficult to quickly move on to new opportunities.
Logical Partitioning
Logical partitioning refers to the separation of execution environments within a computing system
using logic implemented through software. There are different ways in which the resources of a
computer system may be managed and shared. Logical partitioning includes software partitioning,
resource partitioning, and service partitioning technologies.
Software partitioning is a software-based technology that allows the resources of a single, physical
computer to be divided into multiple partitions (also called soft partitions or virtual machines),
each of which can host its own, isolated instance of an operating system. Software partitioning is
generally similar to hardware partitioning in that multiple instances of operating systems may
coexist on a single physical server. The major difference between hardware and software
partitioning is that in software partitioning, the isolation of each soft partition and the management
of the shared resources of the computer are completely handled by a special software layer called
a Virtual Machine Monitor (VMM) or Hypervisor. The VMM, as well as each operating system
within each soft partition, all consume computing resources from the same set of hardware, thus
software partitioning incurs overhead that does not exist in hardware partitioning.

The overhead produced by the VMM varies across implementations of software partitioning
systems, but always has an impact on the performance of each soft partition. Depending on how
resources are managed by the VMM, it is conceivable that the computing resources consumed by
each soft partition could also impact the VMM's performance as well. Software partitioning
implementations exist on mid-range to high-end computing systems as well as commodity server
and workstation computers. Server virtualization (and the term virtualization) as described in this
book refers directly to server-based software partitioning systems, generically referred to as
virtualization platforms.
Software partitioning systems are generally implemented in one of two ways. They are either
hosted as an application in an existing operating system installed on a physical computer or they
are installed natively on a physical computer without an operating system. When software
partitioning systems are hosted in an existing operating system, they gain the advantages of
leveraging that operating system's resource management capabilities, its hardware compatibility,
and application programming interfaces (APIs). This allows the software partitioning system to be
smaller and potentially easier to write and support. This configuration also imposes the deficiencies
and inefficiencies of the host operating system upon the software partitioning system as well as
the additional resource consumption of the host operating system.

Fig No.12 Hardware Partitioning

Software Partitioning Hosted in an Existing Operating System

Hosted software partitioning systems generally have more overhead and less performance than
their native counterparts. Current server-based implementations of software partitioning systems
include Microsoft Virtual Server and VMware GSX Server for Windows, both of which run in a Windows Server operating system, and VMware GSX Server for Linux, which runs in a Linux operating system.
Software partitioning systems installed natively onto "bare metal" (the physical computer
hardware) without an operating system are generally more efficient in their management of the
computer's resources. This is because the software partitioning system has full control over those
resources. Although this type of implementation is more difficult to write and support, it is not
burdened by the overhead of another operating system, generally allowing more performance for
each soft partition. VMware ESX Server is currently the most mature implementation of a natively
installed software partitioning system for x86-based server architectures.

Fig No.13 Software Partitioning Hosted in an Existing Operating System


Software Partitioning Installed Natively on Hardware
A big advantage of software partitioning systems available for x86-based computers over hardware
partitioning is cost. These systems run on standardized, commodity server hardware, which is
much less expensive than the proprietary hardware partitioning systems. Because the hardware is
also standardized across the industry, most IT organizations have the necessary skills to properly
scope, deploy, configure, and administer the hardware today. This also lessens the implementation
and support costs of software partitioning versus hardware partitioning. Software partitioning
systems also offer the advantage of hardware portability. Each soft partition (or virtual machine)
is fully abstracted from the underlying hardware, which allows the partition to be moved to any physical computer that has the software partitioning system installed. An example is two computers with dramatically different hardware, such as a dual-processor, rack-mounted server and a laptop computer, each running Microsoft Virtual Server. This abstraction is what makes server virtualization platforms so capable. The benefit is also
realized in terms of hardware upgrades because as long as the software partitioning system is
supported on newer hardware, each soft partition will run as expected and in most cases it is trivial
to move the soft partitions to another physical computer. The issue of vendor lock-in is also
avoided in regard to the physical hardware since software partitioning systems for x86-based
computer architectures can be installed on many different manufacturers' hardware (again due to
industry standardization).
Software partitioning systems use a combination of emulation, simulation, and pass-through in
their hardware abstraction methods. Each soft partition "sees" its own set of hardware resources
that it may consume. This leads to an interesting question: can a software-partitioning
virtualization platform be installed and used within a soft partition? Although this is theoretically
possible, it is highly impractical and unusable as a solution. In some cases, depending on the
specific virtualization platforms used, the virtualization platform may not even complete the
installation process. In other cases, operating systems installed in a soft partition of another soft
partition execute too slowly to be used effectively. This is most likely due to the multiplication of
overhead within the system. For instance, when using preemptive multi-tasking operating systems
for the host platform installed directly on the hardware and for the operating systems installed in
each soft partition, the effect of time-slicing the physical CPU between all of the processes is multiplied for those processes executing within the first- and second-level soft partitions. If more
than one soft partition is created in the virtualization platform installed in a soft partition, the effect
is worsened because the multiplier increases. It is generally a bad idea to embed entire software
partitioning systems within existing soft partitions.
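A rough way to see why this multiplication hurts is to model each nesting level as handing each of its partitions an equal slice of whatever CPU share it received, minus its own scheduling overhead. The Python sketch below is only an illustrative model with a made-up per-level overhead figure, not a measurement of any real virtualization platform.

def effective_cpu_share(partitions_per_level, overhead_per_level=0.10):
    """CPU fraction left for a single guest process when each nesting level
    splits its share equally among its partitions and loses a fixed fraction
    to its own VMM/scheduler overhead (illustrative figure only)."""
    share = 1.0
    for n in partitions_per_level:
        share *= (1.0 - overhead_per_level) / n
    return share

if __name__ == "__main__":
    # One level with four soft partitions versus four first-level partitions,
    # each hosting a nested platform with two second-level partitions.
    print(f"single level, 4 partitions : {effective_cpu_share([4]):.1%}")
    print(f"nested, 4 x 2 partitions   : {effective_cpu_share([4, 2]):.1%}")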

Application partitioning is a software-based technology that allows the operating system resources
on which an application depends to be placed in an alternate container within the operating system
without the application's knowledge. These resources are said to be virtualized by the application
partitioning system. The isolated application can then be executed in multiple instances
simultaneously in the same operating system, by one or more users, without the application
instances interfering with one another. Each instance of the application has no knowledge that the
other instances exist and the application does not require any modifications to be hosted by the
application partitioning system.
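As a toy illustration of this idea, the sketch below launches several instances of the same unmodified command while redirecting the file-system state each instance sees into its own container directory. Real application partitioning products intercept far more of the operating system than the HOME and working directories; the paths and the command here are invented purely for illustration.

import pathlib
import subprocess

def run_partitioned(command, instance_id, base=pathlib.Path("/tmp/app-partitions")):
    """Run one instance of an unmodified command with its state redirected
    into a per-instance container directory."""
    container = base / f"instance-{instance_id}"
    container.mkdir(parents=True, exist_ok=True)
    env = {"HOME": str(container), "TMPDIR": str(container), "PATH": "/usr/bin:/bin"}
    # The command writes its files into `container` without knowing that
    # other instances of the same application exist.
    return subprocess.run(command, cwd=container, env=env)

if __name__ == "__main__":
    for i in range(3):  # three isolated instances of the same "application"
        run_partitioned(["sh", "-c", "echo state > profile.cfg"], i)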

Fig No.14 Software Partitioning Installed Natively on Hardware

Application Partitioning

The primary advantage of an application partitioning system is that any application, regardless of whether it was designed to be used by a single user or multiple users, can be centrally managed and made
available in a distributed fashion. A single server can execute many instances of the application
and each application instance state is written into a separate container. Each container is
automatically handled by the application partitioning system. Application partitioning can
consolidate a single application from multiple desktop computers and servers onto a single server
and the application can be managed much like a single instance of the application. The operating
system itself is not completely abstracted from the application, only certain subcomponents such as data storage facilities (file systems); therefore, only applications that would normally run on the operating system being used can be hosted under the application partitioning system.
Resource partitioning is a software-based technology that abstracts how certain operating system
resources are allocated to application instances or individual processes executing within the
operating system. This technology is used to control resource consumption of applications and
processes, allowing more granular control than what is provided by the operating system. Resource
partitioning systems also allow the resource consumption to be controlled not only at the
application or process level, but also by the combination of application or process and user account.
Resource partitioning systems enable the operating system to become Quality-of-Service enabled.

Fig No.15 Application Partitioning

Resource Partitioning
Application instances or processes can be given parameters that allow certain minimum and
maximum levels of resource utilization such as CPU, memory, and disk I/O to be effectively
controlled and managed. Just as in application partitioning, resource partitioning does not abstract
the entire operating system from an application, therefore only applications that would normally
run on the operating system being used are allowed to be controlled by the resource partitioning
system.
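On Linux, the kernel's cgroup v2 interface is one concrete example of this kind of per-process resource control. The sketch below is a hedged illustration that assumes cgroup v2 is mounted at /sys/fs/cgroup and that the script runs with privileges to create groups; the group name and limit values are invented for the example.

import os
import pathlib

CGROUP_ROOT = pathlib.Path("/sys/fs/cgroup")  # assumes a cgroup v2 mount

def create_partition(name, cpu_percent=50, memory_bytes=512 * 1024 * 1024):
    """Create a resource partition capped at a share of one CPU and a fixed
    amount of memory."""
    group = CGROUP_ROOT / name
    group.mkdir(exist_ok=True)
    # cpu.max takes "<quota> <period>" in microseconds: 50000/100000 = 50% of one CPU.
    (group / "cpu.max").write_text(f"{cpu_percent * 1000} 100000")
    (group / "memory.max").write_text(str(memory_bytes))
    return group

def assign_process(group, pid=None):
    # Writing a PID into cgroup.procs binds that process to the partition's limits.
    (group / "cgroup.procs").write_text(str(pid if pid is not None else os.getpid()))

if __name__ == "__main__":
    partition = create_partition("reporting-app")
    assign_process(partition)  # this process is now limited to 50% CPU and 512 MiB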
Service partitioning is a software-based technology in which a single application instance provides
multiple, isolated instances of a service. Each instance of the service usually appears to consumers
of that service to be a dedicated application instance (and often a dedicated server instance).
Abstraction between the service application and the operating system is not required. The
abstraction occurs on top of the application instance, which allows multiple instances of the
application's service to coexist. The level of isolation between service instances can vary greatly
between implementations of service partitioning, and in some systems can even be controlled,
providing complete isolation at one extreme and no isolation at the other extreme of the isolation
configuration settings.

Fig No.16 Resource Partitioning

Service Partitioning
Common examples of service partitioning systems include database and web server applications.
In a database server application, a single instance of the database server executes within the
operating system. The primary service of the database server is to provide database access.
Database servers can typically provide multiple databases per server instance. Each database can
be configured to appear to a consumer to be the only database on the server, when in reality there
may be 20 databases being concurrently accessed by 500 users. Most modern Web server
applications allow multiple "virtual" Web sites to be created and hosted simultaneously from a
single instance of the application. Each Web site is isolated from the other and, from a Web surfer's
point-of-view, each Web site appears as if it is hosted on its own dedicated server, when in reality,
there may be 100 Web sites running concurrently from the single Web server application instance.
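The sketch below illustrates the web-server case using Python's standard library: a single server process hosts several "virtual" sites, each isolated to its own document root and selected by the HTTP Host header. The host names and directory paths are invented for the example; production web servers implement virtual hosting far more completely.

import pathlib
from http.server import BaseHTTPRequestHandler, HTTPServer

SITES = {  # one isolated document root per virtual site
    "shop.example.com": pathlib.Path("sites/shop"),
    "blog.example.com": pathlib.Path("sites/blog"),
}

class VirtualHostHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = (self.headers.get("Host") or "").split(":")[0]
        root = SITES.get(host)
        if root is None:
            self.send_error(404, "unknown virtual site")
            return
        page = root / "index.html"
        body = page.read_bytes() if page.exists() else b"<h1>empty site</h1>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)  # each visitor sees only its own site's content

if __name__ == "__main__":
    # Every site above is served by this one application instance on port 8080.
    HTTPServer(("0.0.0.0", 8080), VirtualHostHandler).serve_forever()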
Types of Server Virtualization
Hypervisor
Para virtualization
Full virtualization
Hardware-assisted virtualization
Kernel-level virtualization
System-level or OS virtualization
Business Cases for Server Virtualization
Windows Server virtualization, the deployment of a virtual version of a Windows Server operating environment, is used to reduce hardware costs, gain efficiencies, and improve the availability of computing resources. It refers to installing a virtual environment onto one or more physical servers (termed Physical Hosts) and deploying multiple virtual Windows Server operating systems (termed Virtual Guests) onto this virtual environment.

In small to medium-sized businesses, we typically see three levels of Windows Server virtualization, with these increasing benefits:

Single Physical Host: Cost savings (energy and hardware) with some flexibility

Multiple hosts with a Storage Area Network (SAN): Highly available environment with minimal downtime

Multiple hosts with Site-to-Site Failover: Disaster recovery to a separate location

We review each of these levels below.

Single Physical Host

This virtualization level has these components:

Single hardware server with onboard storage: This hardware server is the platform for the Physical Host; it could be an HP ML350/ML370 tower server or equivalent with multiple disk drives.

Virtualizing software: The operating environment for virtualization; typically the free VMware ESXi or Microsoft Hyper-V. (These products are available as free downloads from the manufacturer.) Installing the virtualizing software onto the hardware server creates the Physical Host.

Multiple Virtual Guests: The virtual operating systems installed onto the Physical Host; usually one or more instances of Windows Server. (These must be licensed copies of Windows Server and any associated, server-based applications.)

This environment consolidates several Windows Server instances onto a single hardware server
with sufficient processing capability, Random Access Memory (RAM), and on-board disk storage.
It introduces cost savings in hardware, energy, and support and provides some flexibility in the
transfer of a virtualized instance to a new hardware platform (although this transfer is manual and
requires a second hardware server).

Primary business benefits:

Less up-front acquisition cost (capital expenditure or CapEx) since a single hardware
server can be used rather than two or more hardware servers. Plus, the virtualizing software at this
level is basically free.

Less energy required to power a single hardware server than multiple hardware servers, which leads to reduced operating expenses (OpEx).

Fewer components to support, which could lead to lower support costs.

Increased flexibility and scalability when migrating to a new hardware server.

This virtualizing environment works well in a business with a couple of Windows Servers that is looking to reduce capital and operating costs.

Multiple Physical Hosts with a Storage Area Network

At this level, we separate the storage (disk drives) from the Physical Host and move it to a
separate Storage Area Network (SAN). We also add sophisticated virtualizing software capable of
automatically managing the location of Virtual Guests.

This level adds redundancy to all critical components within the equipment stack, such that any single component can fail without compromising system reliability.

Improved performance is also likely since the virtualizing software can automatically balance
available resources against Virtual Guest needs.

This virtualization level has these primary hardware components:

Storage Area Network (SAN), preferably with redundant disk chassis and network switching

Two or more Physical Hosts, preferably with N+1 redundancy

Two or more VLAN-capable Ethernet switches

Each item is a critical part of the overall design:

All data and Virtual Guests reside on the SAN

Virtual Guests are balanced among the Physical Hosts

Ethernet switches route all the traffic between the SAN and the Physical Hosts

If any item fails, the system fails. So, each item must be redundant (to increase reliability) and must be properly maintained.

Multiple Hosts with Site-to-Site Failover

Our highest level of Windows Server virtualization, Multiple Hosts with Site-to-Site Failover, addresses the issue of a single-site failure: how long does it take to recover to a new location if your primary site fails (as in a building catastrophe such as a long-term power outage, flooding, fire, or theft)?

Like most data-center-uptime strategies, redundancy is the core concept; in this case, a second site
is equipped with comparable equipment and the data is synchronized between the primary and
secondary site. Done properly, the secondary site can be brought up either automatically or, when
budget is a constraint, within a short interval of an hour or less.

Configuring for automatic failover can be considerably more expensive than allowing a short
interval of an hour or less to recover since you essentially need to duplicate the primary site at the
remote location, have sufficient bandwidth between the locations to permit real-time replication,
and deploy some additional equipment and software to manage the automatic failover.

While automatic failover is feasible, we structure the failover interval (automatic or short) to meet the business's budget and recovery requirements.

When configuring for automatic failover, several items must be adjusted:

P4500 SANs must be deployed at the primary and remote site(s) and must be configured in a multi-
site cluster

VMware vSphere Enterprise or better is required and must be licensed for both the primary and remote sites
