What is server virtualization? The ultimate guide
By Stephen J. Bigelow, Senior Technology Editor, and Alexander S. Gillis, Technical Writer and Editor
Server virtualization is a process that creates and abstracts multiple virtual
instances on a single server. Server virtualization also abstracts or masks server
resources, including the number and identity of individual physical machines,
processors and different operating systems.
Server virtualization changed the traditional one-application-per-server model. Virtualization adds a layer
of software, called a hypervisor, to a computer, abstracting the underlying
hardware from all the software that runs above it. Virtualization translates physical
resources into virtual -- logical -- equivalents. The hypervisor then organizes and
manages the computer's virtualized resources, provisioning them into logical
instances called virtual machines (VMs), each capable of functioning as a separate
and independent server.
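To make this concrete, the following is a minimal sketch -- assuming a Linux host running a KVM/QEMU hypervisor with the open source libvirt Python bindings, none of which this guide prescribes -- of how a management layer asks the hypervisor for its VMs and the virtual resources provisioned to each.

```python
# Minimal sketch: connect to a local hypervisor via libvirt (assumed
# KVM/QEMU environment) and list each VM with its provisioned resources.
import libvirt

def list_vms(uri: str = "qemu:///system") -> None:
    conn = libvirt.open(uri)                    # connect to the hypervisor
    try:
        for dom in conn.listAllDomains():       # every VM defined on this host
            state, max_mem_kib, _mem, vcpus, _cpu_time = dom.info()
            print(f"{dom.name():20} state={state} vcpus={vcpus} "
                  f"mem={max_mem_kib // 1024} MiB")
    finally:
        conn.close()

if __name__ == "__main__":
    list_vms()
```

Each entry corresponds to one logical instance carved out of the same physical server's processors and memory.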
Virtualization has changed the face of enterprise computing, but its many benefits
are sometimes tempered by factors such as licensing and management complexity, as
well as potential availability and downtime issues. Organizations must understand
what virtualization is, how it works, its tradeoffs and use cases. Only then can an
organization adopt and deploy virtualization effectively across the data center.
Virtualization isn't a new idea. The technology first appeared in the 1960s during
the early era of computer mainframes as a means of supporting mainframe time-
sharing, which divides the mainframe's considerable hardware resources to run
multiple workloads simultaneously. Virtualization was an ideal and essential fit
for mainframes because their substantial cost and complexity typically limited an
organization to a single deployed system, so the organization had to get the most
utilization from that investment.
The advent of x86 computing architectures brought readily available, simple, low-
cost computing devices into the 1980s. Organizations moved away from mainframes and
embraced individual computer systems to host or serve each enterprise application
to growing numbers of user or client endpoint computers. Because individual x86-
type computers were simple and limited in processing, memory and storage capacity,
the x86 computer and its operating systems (OSes) were typically only capable of
supporting a single application. One big, shared computer was replaced by many
little cheap computers. Virtualization was no longer necessary, and its use faded
into history along with mainframes.
But two factors emerged that drove the return of virtualization technology to the
modern enterprise. First, computer hardware evolved quickly and dramatically. By
the early 2000s, typical enterprise-class servers routinely provided multiple
processors and far more memory and storage than most enterprise applications could
realistically use. This resulted in wasted resources -- and wasted capital
investment -- as excess computing capacity on each server went unused. It was
common to find an enterprise server utilizing only 15% to 25% of its available
resources.
The second factor was a hard limit on facilities. Organizations simply procured and
deployed additional servers as more workloads were added to the enterprise
application repertoire. Over time, the sheer number of servers in operation could
threaten to overwhelm a data center's physical space, cooling capacity and power
availability. The early 2000s experienced major concerns with energy availability,
distribution and costs. The trend of spiraling server counts and wasted resources
was unsustainable.
Today's virtualization platforms embrace the same functional ideas as their early
mainframe counterpart. Virtualization abstracts software from the underlying
hardware, enabling virtualization to provision and manage virtualized resources as
isolated and independent logical instances -- effectively turning one physical
server into multiple virtual servers, each capable of operating independently to
support multiple applications running on the same physical computer at the same
time.
The importance of server virtualization has been profound because it addresses the
two problems that plagued enterprise computing into the 21st century.
Virtualization lowers the physical server count, enabling an organization to reduce
the number of physical servers in the data center -- or run vastly more workloads
without adding servers. It's a technique called server consolidation. The lower
server count also conserves data center space, power and cooling; this can often
forestall or even eliminate the need to build new data center facilities. In
addition, virtualization platforms routinely provide powerful capabilities such as
centralized VM management, VM migration -- enabling a VM to easily move from one
system to another -- and workload/data protection through backups and snapshots.
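The consolidation arithmetic behind that claim is simple. As a hedged illustration -- the workload sizes and host capacity below are hypothetical, not figures from this guide -- the sketch estimates how many physical hosts a set of workloads needs once each host carries many VMs instead of one application.

```python
# Hypothetical consolidation estimate; all figures are illustrative.
import math

host_capacity_gib = 256                         # usable memory per physical server
workloads_gib = [8, 16, 4, 32, 8, 12, 24, 8, 16, 4]   # memory each workload needs

hosts_needed = math.ceil(sum(workloads_gib) / host_capacity_gib)
print(f"{len(workloads_gib)} workloads -> {hosts_needed} virtualized host(s) "
      f"instead of {len(workloads_gib)} one-app-per-server machines")
```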
Newer and more resource-rich computers can host a larger number of VMs, while older
systems or those with compute-intensive workloads might host fewer VMs. It's
possible for the hypervisor to assign more virtual resources than the server
physically provides -- a practice called overcommitment -- but this is generally
discouraged because of the performance penalties incurred when the system must
time-share the overcommitted resources. The ready availability of powerful new
computers also makes overcommitment all but unnecessary: the penalties far outweigh
the benefit of squeezing another VM onto an already full physical system. It's
easier and better to provision the additional VM on another system where resources
are available.
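A placement decision like the one described above reduces to a simple capacity check. The sketch below is a hedged illustration -- the host inventory and VM size are hypothetical -- of preferring a host with uncommitted resources over overcommitting a busy one.

```python
# Hypothetical placement check: favor a host with enough uncommitted memory
# for the new VM rather than overcommitting a busy host.
hosts = {
    "host-a": {"memory_gib": 256, "committed_gib": 244},
    "host-b": {"memory_gib": 256, "committed_gib": 180},
}

def pick_host(new_vm_gib: int):
    candidates = [
        (name, h["memory_gib"] - h["committed_gib"])    # uncommitted headroom
        for name, h in hosts.items()
        if h["memory_gib"] - h["committed_gib"] >= new_vm_gib
    ]
    # most headroom wins; None means every host would have to overcommit
    return max(candidates, key=lambda c: c[1])[0] if candidates else None

print(pick_host(32))    # -> "host-b"
```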
Risk and availability. Running multiple workloads on the same physical computer
carries risks for the organization. Before the advent of virtualization, a server
failure only affected the associated workload. With virtualization, a server
failure can affect multiple workloads, potentially causing greater disruption to
the organization, its employees, partners and customers. IT leaders must consider
issues such as workload distribution -- which VMs should reside on which physical
servers -- and implement recovery and resiliency techniques to ensure critical VMs
are available in the aftermath of server or other physical infrastructure faults.
VM sprawl. IT resources depend on careful management to track the availability,
utilization, health and performance of resources. Knowing what's present, how it's
used and how it's working are keys to data center efficiency. A persistent
challenge with virtualization and VMs is the creation and eventual -- though
sometimes unintended -- abandonment of VMs. Unused or unneeded VMs continue to
consume valuable server resources while doing little or no useful work; meanwhile,
those resources aren't available to other VMs. Over time, VMs proliferate, and the
organization runs short of resources, forcing it to make unplanned investments in
additional capacity. The phenomenon is called VM sprawl or virtual server sprawl.
Unneeded VMs must be identified and decommissioned so that resources are freed for
reuse. Proper workload lifecycle management and IT resource management will help to
mitigate sprawl issues, but it takes effort and discipline to address sprawl.
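As a starting point for that discipline, a script can at least surface candidates for review. The sketch below -- again assuming a KVM/QEMU host with the libvirt Python bindings, which this guide doesn't mandate -- flags VMs that are defined but not running; being shut off doesn't prove a VM is abandoned, so the output only feeds a lifecycle review.

```python
# Sprawl triage sketch: flag VMs that are defined but not running so an
# administrator can review whether they're still needed.
import libvirt

conn = libvirt.open("qemu:///system")
try:
    for dom in conn.listAllDomains():
        if not dom.isActive():                  # defined on the host, but powered off
            print(f"review candidate: {dom.name()}")
finally:
    conn.close()
```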
Resource shortages. Virtualization makes it possible to oversubscribe a server's
physical resources, primarily memory and network bandwidth. For example, VMs can
share the same physical memory space, relying on conventional page swap --
temporarily moving memory pages to a hard disk so the memory space can be used by
another application. Virtualization can assign more memory than the server has;
this is called memory overcommitment. Overcommitment is undesirable because the
additional latency of disk access can slow the VM's performance. Network bandwidth
can also become a bottleneck as multiple VMs on the same server compete for network
access. Both issues can be addressed by upgrading the host server or by
redistributing VMs between servers.
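One quick way to spot the memory problem is to compare what has been promised to VMs against what the host physically has. The following hedged sketch uses the libvirt Python bindings (an assumed environment, as before) to compute that overcommitment ratio.

```python
# Memory overcommitment check: total memory assigned to all defined VMs
# versus the host's physical memory.
import libvirt

conn = libvirt.open("qemu:///system")
try:
    host_mem_mib = conn.getInfo()[1]            # host physical memory, in MiB
    assigned_mib = sum(d.maxMemory() for d in conn.listAllDomains()) // 1024
    print(f"assigned {assigned_mib} MiB vs {host_mem_mib} MiB physical "
          f"(ratio {assigned_mib / host_mem_mib:.2f})")
finally:
    conn.close()
```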
Licensing. Software costs money in procurement and licensing, which can easily be
overlooked. Hypervisors and associated virtualization-capable management tools
impose additional costs on the organization, and hypervisor licensing must be
carefully monitored to observe the terms and conditions of the software's licensing
agreements. License violations can carry litigation and significant financial
penalties for the offending organization. In addition, VMs on a bare-metal
hypervisor each require their own guest OS, which means a separate license for each
OS deployment.
Experience. Successful implementation and management of a virtualized environment
depends on the expertise of IT staff. Education and experience are essential to
ensure that resources are provisioned efficiently and securely, monitored and
recovered in a timely manner, and protected appropriately to ensure each workload's
continued availability. Business policies play an important role in resource use,
helping to define how new VMs are requested, approved, provisioned and managed
throughout the VM's lifecycle. Fortunately, virtualization is a mature and widely
adopted technology today, so there are ample opportunities for education and
mentoring in hypervisors and virtualization management.
Server virtualization drawbacks include implementation and licensing costs, virtual
server sprawl, security concerns and resource contention.
Use cases and applications
Virtualization has proven to be a reliable and versatile technology that has
permeated much of the data center in the last two decades. Yet organizations might
continue to face important questions about suitable use cases and applications for
virtualization deployment. Today, server virtualization can be applied across a
vast spectrum of enterprise use cases, projects and business objectives, from
server consolidation and test and development environments to business continuance
and disaster recovery.
The hypervisor is responsible for abstracting and managing the host computer's
resources, such as processors and memory, and then providing those abstracted
resources to one or more VM instances. Each VM exists as a guest atop the
hypervisor. Guest VMs are completely logically isolated from the hypervisor and
other VMs. Each VM requires its own guest OS, enabling organizations to employ
varied OS versions on the same physical computer.
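In practice, "provisioning those abstracted resources" means handing the hypervisor a description of the guest. The sketch below is a hedged example using libvirt -- the domain name, sizes and disk path are hypothetical -- of defining and starting a guest VM that boots its own OS from its virtual disk.

```python
# Hedged sketch: define and start a guest VM through libvirt. The name,
# memory, vCPU count and disk path are hypothetical placeholders.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-guest</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/demo-guest.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")
try:
    dom = conn.defineXML(DOMAIN_XML)            # register the guest with the hypervisor
    dom.create()                                # boot it; the guest OS runs inside
finally:
    conn.close()
```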
Paravirtualization
Early bare-metal hypervisors faced performance limitations. Paravirtualization
emerged to address those early performance issues by modifying the host OS to
recognize and interoperate with a hypervisor through commands called hypercalls.
Once successfully modified, the host could create and manage guest VMs, and those
guest VMs could run varied, unmodified OSes and unmodified applications.
The principal challenge of paravirtualization is the need for a host OS -- and the
need to modify that host OS -- to support virtualization. Unmodified proprietary
OSes, such as Microsoft Windows, won't support a paravirtualized environment, and a
paravirtualized hypervisor, such as Xen, requires support and drivers built into
the Linux kernel. This poses considerable risk for OS updates and changes. An
organization shifting from one OS to another might risk losing paravirtualization
support. The popularity of paravirtualization quickly waned as computer hardware
evolved to support VMM-based virtualization directly, such as the introduction of
virtualization extensions to the processor's instruction set.
Hosted virtualization potentially makes guest VMs far more resource efficient
because VMs share a common OS -- the OS need not be duplicated for every VM.
Consequently, hosted virtualization can potentially support hundreds, even
thousands, of VM instances on the same system. However, the common OS offers a
single vector for failure or attack: If the host OS is compromised, all the VMs
running atop the hypervisor are potentially compromised too.
The efficiency of hosted virtualization has spawned the development of containers.
The basic concept is similar to hosted virtualization: a virtualization layer is
installed atop a host OS, and the virtual instances all share that OS. But the
container engine layer -- for example, Docker or Apache Mesos -- is tailored
specifically for high volumes of small, efficient instances intended to share
common components or dependencies, such as binaries and libraries. Containers have
seen significant growth with microservice-based software architectures, where
agile, highly scalable components are deployed to and removed from the environment
quickly.
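For contrast with the VM example earlier, the following hedged sketch uses the Docker SDK for Python -- an assumed toolchain; this guide names Docker only as an example engine -- to start one such lightweight instance, which shares the host's kernel rather than booting a guest OS.

```python
# Hedged sketch: run a short-lived container with the Docker SDK for Python
# (assumes Docker Engine and the 'docker' package are installed). The
# container shares the host kernel, so it starts in a fraction of a second.
import docker

client = docker.from_env()                      # talk to the local Docker daemon
output = client.containers.run(
    "alpine:latest",                            # small shared base image
    ["echo", "hello from a container"],
    remove=True,                                # clean up when the command exits
)
print(output.decode().strip())
```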
Migrating VMs from one hypervisor to another isn't quick or easy. The decision to
change hypervisors should be carefully tested and validated well in advance of any
actual migration initiative.
Have a plan. Don't adopt virtualization for its own sake. Server virtualization
offers some significant benefits, but there are also costs and complexities to
consider. An organization planning to adopt virtualization for the first time
should have a clear understanding of why and where the technology fits in a
business plan. Similarly, organizations that already virtualize parts of the
environment should understand why and how expanding the role of virtualization will
benefit the business. The answer might be as obvious as a server consolidation
project to save money, or a vehicle to support active software development projects
outside of the production environment. Regardless of the drivers, have a plan
before going into a virtualization initiative.
Assess the hardware. Get a sense of scope. Virtualization software, both
hypervisors and management tools, must be purchased and maintained. Understand the
number of systems as well as the applications that must be virtualized and
investigate the infrastructure to verify that the hardware can support
virtualization. Almost all current data center hardware is suited to
virtualization, but perform due diligence upfront to avoid discovering incompatible
or inadequate hardware during an installation.
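Part of that due diligence can be automated. The sketch below is a hedged, Linux-only check for one prerequisite -- whether the host CPU advertises hardware virtualization extensions (Intel VT-x appears as the vmx flag, AMD-V as svm); it doesn't replace a full compatibility review.

```python
# Linux-only sketch: check /proc/cpuinfo for hardware virtualization flags.
from pathlib import Path

tokens = Path("/proc/cpuinfo").read_text().split()
if "vmx" in tokens or "svm" in tokens:
    print("CPU reports hardware virtualization support (vmx/svm)")
else:
    print("No vmx/svm flag found -- check BIOS/UEFI settings or the hardware")
```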
Test and learn. Any new virtualization rollout is typically preceded by a period of
testing and experimentation, especially when the technology is new to the
organization and IT team. IT teams should have a thorough working knowledge of a
virtualization platform before it's deployed and used in a production setting. Even
when virtualization is already present, the move to virtualize new workloads --
especially mission-critical workloads -- should involve detailed proof-of-principle
projects to learn the tools and validate the process. Smaller organizations can
turn to service providers and consultants for help if necessary.
Focus on the business. Virtualization should be deployed and used according to the
needs of the business, including a careful consideration of security, regulatory
compliance, business continuance, disaster recovery and VM lifecycles --
provisioning, using and then later recovering resources. IT management tools should
support virtualization and map appropriately against all those business
considerations.
Start small and build out. Organizations new to server virtualization should follow
a period of testing and experimentation with small, noncritical virtualization
deployments, such as test and development servers. Seek the small and quick wins to
gain experience, learn troubleshooting and demonstrate the value of virtualization
while minimizing risk. Once a body of expertise is available, the organization can
plan and execute more complex virtualization projects.
Adopt guidelines. As the organization embraces server virtualization, it's
appropriate to create and adopt guidelines around VM provisioning, monitoring and
lifecycles. Computing resources cost money. Guidelines can help codify the
processes and practices that enable an organization to manage those costs, avoid
resource waste by preventing overprovisioning and VM sprawl and maintain consistent
behaviors that tie back to security and compliance issues. Guidelines should be
periodically reviewed and updated over time.
Select a tool. Virtualization management tools usually aren't the first
consideration in an organization's virtualization strategy. Virtualization
platforms typically include basic tools, and it's good practice to get comfortable
with those tools in the early stages of virtualization adoption. Eventually, the
organization might find benefits in adopting more comprehensive and powerful tools
that support large and sophisticated virtualization environments. By then, the
organization and IT staff will have a clear picture of the features and
functionality required from a tool, why those features are needed and how those
features will benefit the organization.

Server virtualization management tools are selected based on a wide range of
criteria, including licensing costs; cross-platform compatibility supporting
multiple hypervisors from multiple vendors; support for templates and automation;
direct control over VMs and storage; and even the potential for self-service and
chargebacks -- enabling other departments or users to provision VMs and receive
billing if desired.

Organizations can choose from many server virtualization monitoring tools that vary
in features, complexity, compatibility and cost. Virtualization vendors typically
provide tools intended for the vendor's specific hypervisors. For example,
Microsoft System Center supports Hyper-V, while vCenter Server is suited for VMware
hypervisors. But organizations can also opt for third-party tools, including
ManageEngine Applications Manager, SolarWinds Virtualization Manager and Veeam One.
Support automation. Virtualization lends itself to automation and orchestration
techniques that can speed common provisioning and management tasks while ensuring
consistent execution, minimizing errors, mitigating security risks and bolstering
compliance. Generally, tools support automation, but it takes human experience and
insight to codify established practices and processes into suitable automation. The
adoption of containers depends especially heavily on automation and orchestration,
using well-designed tools such as Kubernetes to manage the containerized
environment.
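As a small taste of what such automation builds on, the hedged sketch below uses the official Kubernetes Python client -- an assumed library; this guide names Kubernetes but no particular client -- to inventory the Deployments the orchestrator is currently reconciling.

```python
# Hedged sketch: list Deployments with the Kubernetes Python client
# (assumes the 'kubernetes' package and a working kubeconfig).
from kubernetes import client, config

config.load_kube_config()                       # read the local kubeconfig
apps = client.AppsV1Api()
for dep in apps.list_deployment_for_all_namespaces().items:
    ready = dep.status.ready_replicas or 0
    print(f"{dep.metadata.namespace}/{dep.metadata.name}: "
          f"{ready}/{dep.spec.replicas} replicas ready")
```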
Vendors and products
There are numerous virtualization offerings in the current marketplace, but the
choice of vendors and products often depends heavily on virtualization goals and
established IT infrastructures. Organizations that need bare-metal -- Type 1 --
hypervisors for production workloads can typically select from VMware vSphere,
Microsoft Hyper-V, Citrix Hypervisor, IBM Red Hat Enterprise Virtualization (RHEV)
and Oracle VM Server for x86. VMware dominates the current virtualization landscape
for its rich feature set and versatility. Microsoft Hyper-V is a common choice for
organizations that already standardize on Microsoft Windows Server platforms. RHEV
is commonly employed in Linux environments.
But some products are also designed for advanced mission-specific tasks. When
comparing vSphere ESXi to Nutanix, Nutanix AHV brings hyperconverged infrastructure
(HCI), software-defined storage and its Prism management platform to enterprise
virtualization. However, AHV is intended for HCI only; organizations that need more
general-purpose virtualization and tools might turn to the more mature VMware
platform instead.
Organizations can also choose between Xen -- commercially called Citrix Hypervisor
-- and Linux KVM hypervisors. Both can run multiple OSes simultaneously, providing
network flexibility, but the decision often depends on the underlying
infrastructure and any cloud interest. Today, Amazon is reducing support for Xen
and opting for KVM, and this can influence the choice of hypervisor for
organizations worried about the integration of virtualization software with any
prospective cloud provider.
The choice of any hypervisor should only be made after an extended period of
evaluation, testing and experimentation. IT and business leaders should have a
clear understanding of the compatibilities, performance and technical nuances of a
preferred hypervisor, as well as a thorough picture of the costs and license
implications of the hypervisor and management tools.
Server virtualization has come a long way in the last two decades. Today, server
virtualization is viewed largely as a commodity. It's table stakes -- a commonly
used, almost mandatory, element of any modern enterprise IT infrastructure.
Hypervisors have also become commodity products with little new or innovative
functionality to distinguish competitors in the marketplace. The future of server
virtualization isn't a matter of hypervisors, but rather how server virtualization
can support vital business initiatives.
First, consider the burgeoning influence of containers. VMs and containers are two
different types of virtualization, handled by different virtualization layers --
hypervisors for VMs and container engines for containers -- yet the two can
certainly operate side by side in a data center to handle different types of
enterprise workloads.
Second, the continued influence and evolution of technologies such as HCI will test
the limits of virtualization management. For example, recent trends toward
disaggregation or HCI 2.0 work by separating computing and storage resources, and
virtualization tools must efficiently organize those disaggregated resources into
pools and tiers, provision those resources to workloads and monitor those
distributed resources accurately.
The continued threats of security breaches and malicious attacks will further the
need for logging, analytics and reporting, change management and automation. These
factors will drive the evolution of server virtualization management tools --
though not the hypervisor itself -- and improve visibility into the environment for
business insights and analytics.
Stephen J. Bigelow, senior technology editor at TechTarget, has more than 20 years
of technical writing experience in the PC and technology industry.