
Chapter 6: Virtualization

This chapter introduces the fundamental concepts, important characteristics, and mainstream technologies of virtualization, which serves as the bedrock of cloud computing.

 6.1 Overview

o 6.1.1.1 What Is Virtualization? (Previously described in Section 1.2.2.1, so not detailed again here.)

o 6.1.1.2 Important Concepts of Virtualization:

 Hypervisor (Virtual Machine Monitor / VMM): A software layer between physical hardware and operating systems (OSs) that enables multiple OSs and applications to share hardware resources.

 Virtual Machine (VM) / Guest Machine: A virtualized instance created from shared resources, on which a guest OS is installed.

 Guest OS: The operating system running inside a VM.

o 6.1.1.3 Virtualization Types:

 Categorized into full virtualization, paravirtualization, and hardware-assisted virtualization. (Details are explained in the CPU Virtualization section.)

o 6.1.1.4 Virtualization Characteristics:

 Partitioning: The virtualization layer allocates physical server resources to multiple VMs, each with independent (and potentially different) OSs and virtual hardware (NICs, CPUs, memory). This allows running diverse applications and prevents resource overuse by isolating resource quotas.

 Isolation: VMs are logically isolated from each other. A crash, failure,
or virus infection in one VM does not affect others, akin to running on
separate physical machines. This also provides performance isolation
by setting minimum/maximum resource limits for each VM.

 Encapsulation: A VM's entire state (hardware configuration, BIOS, memory, disk, CPU) is stored in a group of files, making it independent of physical hardware. This enables easy copying, saving, and moving of VMs (e.g., migration, hot swap) by simply copying these files.

 Hardware Independence: Due to encapsulation, VMs are detached from specific physical hardware. VMs can be migrated between hosts as long as the same VMM exists, regardless of underlying hardware specifications.
o 6.1.1.5 Advantages of Virtualization:

 Virtualization overcomes the limitations of physical servers, such as OS-to-hardware binding, difficult service migration, poor stability, inflexible resource scaling, low resource utilization, and high housing and maintenance costs.

 The hypervisor decouples OSs from physical servers, allowing easy service migration, expansion, and resource integration. Standardized virtual hardware (stored as files) also simplifies security management.

o 6.1.1.6 CPU Virtualization:

 CPU Hierarchical Protection Domain: CPUs operate with privilege levels (Rings), from Ring 0 (most privileged; OS and drivers) to Ring 3 (least privileged; applications). Dangerous instructions are restricted to Ring 0.

 Privileged vs. Common Instructions: Privileged instructions operate on key system resources and can run only in Ring 0; common instructions can run at non-privileged levels (e.g., Ring 3).

 Sensitive Instructions: Instructions that change the operating mode of a VM or the state of the host. Under virtualization, a guest OS's privileged instructions are deprived of their privilege, so its sensitive instructions must be handled by the VMM.

 Classic Virtualization (Mainframes): Pioneered on IBM mainframes (RISC/PowerPC architecture). It uses privilege deprivileging (the guest OS runs at a non-privileged level) and trap-and-emulate (the VMM traps the guest's privileged instructions and emulates them on the physical CPU). This works for RISC because sensitive instructions are a subset of privileged ones.

 x86 Architecture Challenge (Virtualization Vulnerability): Unlike RISC, x86 (a CISC architecture) has 19 sensitive instructions that are not privileged instructions; they can execute at non-privileged levels (e.g., Ring 1) without trapping, so a VMM using classic methods cannot reliably trap and emulate them. This "virtualization vulnerability" prevented direct transplantation of mainframe virtualization to x86.

 Solutions for x86 CPU Virtualization:

 Full Virtualization: (e.g., VMware) Uses binary translation by the VMM to filter all guest OS requests. Privileged and sensitive instructions are trapped and emulated; common instructions run directly. This provides high portability and compatibility without modifying the guest OS, but incurs significant performance overhead due to real-time binary translation.

 Paravirtualization: (e.g., early versions of Xen) Modifies the guest OS to be "virtualization-aware." The guest OS uses hypercalls to explicitly interact with the hypervisor for sensitive operations, avoiding the virtualization vulnerability. Offers near-native performance but requires guest OS modification (limiting support mainly to open-source OSs like Linux) and reduces portability.

 Hardware-assisted Virtualization: (e.g., Intel VT-x, AMD-V) Modern CPUs add a new execution mode (root mode) in which the VMM runs at Ring 0; privileged and sensitive instructions issued by a guest automatically trap to the VMM. This significantly simplifies VMM complexity and boosts performance without requiring guest OS modification or binary translation. (A quick host-support check follows below.)
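
Whether a given x86 host supports hardware-assisted virtualization can be checked from user space. A minimal sketch in Python, assuming a Linux host that exposes CPU feature flags in /proc/cpuinfo (the helper name hw_virt_support is invented for illustration):

    # Check whether the host CPU advertises hardware-assisted virtualization:
    # Intel VT-x appears as the "vmx" flag, AMD-V as the "svm" flag (Linux only).
    def hw_virt_support():
        with open("/proc/cpuinfo") as f:
            flags = set()
            for line in f:
                if line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
        if "vmx" in flags:
            return "Intel VT-x"
        if "svm" in flags:
            return "AMD-V"
        return None

    print(hw_virt_support())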

 6.1.1.6.1 Mappings Between CPUs and vCPUs:

 Virtual CPUs (vCPUs) are logical representations of physical CPU resources (cores, threads).

 The total vCPU capacity is calculated as physical CPUs x cores per CPU x threads per core (a short worked example follows this list).

 Multiple VMs can share the same physical CPU, allowing the total number of vCPUs assigned to VMs on a computing node (CNA) to exceed the host's actual vCPU capacity (oversubscription).
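
A worked example of the capacity formula above as a runnable Python sketch; the host configuration (2 physical CPUs, 8 cores per CPU, 2 threads per core) is a hypothetical example:

    # Total vCPU capacity = physical CPUs x cores per CPU x threads per core.
    cpus, cores_per_cpu, threads_per_core = 2, 8, 2
    total_vcpus = cpus * cores_per_cpu * threads_per_core
    print(total_vcpus)                   # 32 vCPUs available on this node

    # Oversubscription: vCPUs assigned across VMs may exceed the total;
    # the hypervisor time-slices the physical cores to cover the difference.
    assigned_vcpus = 40
    print(assigned_vcpus / total_vcpus)  # 1.25 oversubscription ratio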

o 6.1.1.7 Memory Virtualization:

 Necessity: With multiple VMs sharing a physical host, memory resources must be allocated properly. Each guest OS expects its memory space to start at physical address 0 and to be contiguous, which the host cannot provide to every VM at once, and strictly contiguous allocation would be inefficient.

 Solution: The VMM centrally manages and divides physical memory. Memory virtualization involves three address types:

 Guest Virtual Addresses (GVAs): Used by applications within the guest OS.

 Guest Physical Addresses (GPAs): The "physical" addresses perceived by the guest OS.
 Host Physical Addresses (HPAs): The actual physical addresses
on the host machine.

 Translation: The guest OS maps GVAs to GPAs, and the hypervisor (VMM) maps GPAs to HPAs, because the guest OS cannot directly access host memory.
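
A toy illustration of this two-stage translation, with the guest OS's page table and the VMM's table modeled as plain Python dicts (page-granular; all addresses are invented values, not a real page-table format):

    PAGE = 0x1000  # 4 KiB pages

    guest_page_table = {0x0000: 0x3000}  # GVA page -> GPA page (guest OS managed)
    vmm_page_table = {0x3000: 0x7000}    # GPA page -> HPA page (VMM managed)

    def translate(gva):
        offset = gva % PAGE
        gpa = guest_page_table[gva - offset] + offset  # stage 1: guest OS
        hpa = vmm_page_table[gpa - offset] + offset    # stage 2: hypervisor
        return gpa, hpa

    print([hex(a) for a in translate(0x0042)])  # ['0x3042', '0x7042']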

o 6.1.1.8 I/O Virtualization:

 Necessity: Allows multiple VMs to share limited physical I/O devices (e.g., NICs, storage controllers) by intercepting VM I/O requests and simulating devices.

 Modes:

 Full Virtualization (I/O): The VMM intercepts and emulates all I/O requests. No guest OS modification is required, but significant performance loss results from software-based real-time monitoring and emulation.

 Paravirtualization (I/O): Requires guest VMs to run frontend drivers that send I/O requests to a privileged VM (with backend drivers). The privileged VM then accesses the physical I/O device. This reduces VMM performance loss but requires guest OS modification (typically open-source OSs like Linux). (A toy frontend/backend sketch follows this list.)

 Hardware-assisted Virtualization (I/O): (Mainstream) I/O device drivers are installed directly in the VM's guest OS. With special hardware support, VMs can access I/O devices almost as efficiently as a native host OS, providing much higher performance than full virtualization or paravirtualization.
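
A conceptual Python sketch of the paravirtualized frontend/backend split described above. This is not the real virtio protocol; the shared ring and the frontend_write/backend_poll helpers are invented to show the request flow only:

    from collections import deque

    shared_ring = deque()  # stands in for a shared-memory ring between VMs

    def frontend_write(block, data):
        # Guest-side frontend driver: post the request, never touch hardware.
        shared_ring.append(("write", block, data))

    def backend_poll(device):
        # Backend driver in the privileged VM: drain requests and perform
        # the only real device access in the system.
        while shared_ring:
            op, block, data = shared_ring.popleft()
            if op == "write":
                device[block] = data

    disk = {}              # stands in for the physical device
    frontend_write(7, b"hello")
    backend_poll(disk)
    print(disk)            # {7: b'hello'}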

 6.1.2 Mainstream Virtualization Technologies

o 6.1.2.1 Xen Virtualization:

 Origin: Open-source project from Cambridge University, now promoted by the Xen Project (acquired by Citrix).

 Support: Supports x86/x86_64, IA64, and ARM CPU architectures.

 Types:

 Paravirtualization (PV): Requires specific Linux kernels (paravirt_ops); no hardware-assisted virtualization needed; does not support Windows. Applicable to older servers.

 Hardware Virtual Machine (HVM): Supports native OSs (especially Windows); requires hardware-assisted virtualization. Uses QEMU to emulate hardware. I/O performance improves with "PV on HVM" (paravirtualized drivers replacing emulated ones), which requires MMU hardware-assisted virtualization.

 Architecture: The Xen hypervisor runs directly on hardware (microkernel-like, under 150,000 lines of code). Domain 0 (Dom0) is a privileged VM that manages other VMs and accesses the native device drivers. Guest OSs in early Xen needed modification to interact with Dom0 for device access.

o 6.1.2.2 KVM Virtualization:

 Origin: Open-source project by Qumranet (acquired by Red Hat).

 Nature: Kernel-based Virtual Machine; a Linux kernel module that transforms a Linux machine into a hypervisor when loaded, without affecting other Linux applications. (A quick module check appears at the end of this subsection.)

 Support: Supports x86/x86_64, mainframe, midrange, and ARM architectures.

 Advantages: Makes full use of hardware-assisted virtualization and reuses Linux kernel functions, resulting in low resource consumption.

 Working Modes: The converted Linux kernel manages and schedules host processes (user mode), VMs (guest mode), and its own kernel mode (mode switching and processing VM-Exits and I/O requests).
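
A minimal sketch, assuming a Linux host, that checks whether the KVM module is loaded and its device node is present (the helper name kvm_loaded is invented):

    import os

    def kvm_loaded():
        # /proc/modules lists one loaded module per line, name first.
        with open("/proc/modules") as f:
            modules = {line.split()[0] for line in f}
        return "kvm" in modules and os.path.exists("/dev/kvm")

    print(kvm_loaded())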

o 6.1.2.3 KVM and QEMU:

 KVM's Role: Primarily virtualizes CPU and memory.

 QEMU's Role: A software-based open-source emulator that virtualizes I/O devices (USB, NIC, etc.) and provides complete system emulation.

 Synergy: KVM handles the performance-critical CPU and memory virtualization in the kernel, while QEMU handles I/O virtualization in user space. QEMU uses KVM to accelerate its VMs.

 Communication: The /dev/kvm device file acts as the bridge, with ioctl
system calls facilitating interaction between QEMU (user space) and
KVM (kernel space).
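
The user-space end of this bridge can be exercised directly. A minimal Python sketch that opens /dev/kvm and issues KVM_GET_API_VERSION (ioctl number 0xAE00, from <linux/kvm.h>), which the kernel module answers with its stable API version (12 on current kernels); it assumes a Linux host with KVM and read/write access to /dev/kvm:

    import fcntl, os

    KVM_GET_API_VERSION = 0xAE00  # _IO(0xAE, 0x00) in <linux/kvm.h>

    fd = os.open("/dev/kvm", os.O_RDWR)
    print(fcntl.ioctl(fd, KVM_GET_API_VERSION))  # expected: 12
    os.close(fd)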

o 6.1.2.4 Working Principles of KVM:

 KVM is a Linux kernel module, exposed to user space as the /dev/kvm device file. QEMU, via the Libkvm API, sends VM creation and execution commands to the KVM driver.
 KVM adds a guest mode for VMs. The workflow involves VMs in guest
mode executing non-I/O code, QEMU in user mode handling I/O
instructions, and the KVM module in kernel mode switching to guest
mode or processing VM-Exit for I/O/privileged instructions.

o 6.1.2.5 Virtualization Platform Management Tool - Libvirt:

 Purpose: An open-source API, daemon, and management tool that provides a unified interface for managing various virtualization technologies (KVM, Xen) across different vendors.

 Functionality: Manages virtual clients, networks, and storage.

 Architecture: Uses a driver-based architecture; specific drivers are loaded for different hypervisors, abstracting hypervisor details and providing a stable API to management tools.
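
A minimal sketch using the libvirt Python binding (the libvirt-python package): the same calls work against KVM/QEMU or Xen, with only the connection URI changing:

    import libvirt

    # "qemu:///system" targets the local KVM/QEMU hypervisor;
    # a Xen host would use "xen:///system" instead.
    conn = libvirt.open("qemu:///system")
    for dom in conn.listAllDomains():
        print(dom.name(), "active" if dom.isActive() else "inactive")
    conn.close()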

o 6.1.2.6 Xen vs. KVM:

 "In general, the Xen platform and KVM platform has their own
advantages. The Xen architecture focuses on security. To ensure
security, the access of domains to the shared zone between VMs or
between VMs and the host kernel do not need to be authorized by
the hypervisor, so the access path is short. No performance loss
occurs due to the usage of the Linux Baremetal kernel
