UNIT 5
VIRTUAL MACHINES AND MOBILE OS
Virtual Machines – History, Benefits and Features, Building Blocks, Types of Virtual
Machines and their Implementations, Virtualization and Operating-System Components;
Mobile OS - iOS and Android.
Fig 5.1: System Models (a) Nonvirtual Machine (b) Virtual Machine
Type 0 hypervisors
Hardware-based solutions that provide support for virtual machine creation and management via firmware. These VMMs, which are commonly found in mainframes and large to midsized servers, are generally known as type 0 hypervisors. IBM LPARs and Oracle LDOMs are examples.
Type 1 hypervisors
Operating-system-like software built to provide virtualization, running directly on the host hardware; VMware ESX and Citrix XenServer are examples.
Type 2 hypervisors
Applications that run on standard operating systems but provide VMM features to guest operating systems. These applications, which include VMware Workstation and Fusion, Parallels Desktop, and Oracle VirtualBox, are type 2 hypervisors.
Paravirtualization
A technique in which the guest operating system is modified to work in cooperation with the VMM to optimize performance.
Programming-environment virtualization
VMMs that do not virtualize real hardware but instead create an optimized virtual system; Oracle Java and Microsoft .NET run in such environments.
Emulators
Emulators allow applications written for one hardware environment to run on a very different hardware environment, such as a different type of CPU.
Application containment
Not virtualization at all, but rather a way to provide application isolation within a single operating system; Oracle Solaris Zones and BSD jails are examples.
5.1.2 HISTORY
❖ Suppose that the physical machine had three disk drives but wanted to support seven
virtual machines. Clearly, it could not allocate a disk drive to each virtual machine.
A VMM must exhibit three properties:
Fidelity.
A VMM provides an environment for programs that is essentially identical to the
original machine.
Performance.
Programs running within that environment show only minor performance
decreases.
Safety.
The VMM is in complete control of system resources.
By the late 1990s, Intel 80x86 CPUs had become common, fast, and rich in features.
Both Xen and VMware created technologies, still used today, to allow guest operating
systems to run on the 80x86.
Virtualization has expanded to include all common CPUs, many commercial and open
source tools, and many operating systems.
One important advantage of virtualization is that the host system is protected from the
virtual machines, just as the virtual machines are protected from each other.
A virus inside a guest operating system might damage that operating system but is
unlikely to affect the host or the other guests.
Since each virtual machine is almost completely isolated from all other virtual
machines, there are almost no protection problems.
A potential disadvantage of isolation is that it can prevent sharing of resources.
Two approaches to providing sharing have been implemented. First, it is possible to share a file-system volume and thus to share files. Second, it is possible to define a network of virtual machines, each of which can send information over the virtual communications network.
In addition, multiple versions of a program can run, each in its own isolated operating
system, within one system.
A major advantage of virtual machines in production data-centre use is system
consolidation, which involves taking two or more separate systems and running them in
virtual machines on one system. Such physical-to-virtual conversions result in resource
optimization, since many lightly used systems can be combined to create one more
heavily used system.
A virtual environment might include 100 physical servers, each running 20 virtual
servers. Without virtualization, 2,000 servers would require several system administrators.
With virtualization and its tools, the same work can be managed by one or two
administrators.
1. One of the tools that make this possible is templating, in which one
standard virtual machine image, including an installed and configured guest
operating system and applications, is saved and used as a source for multiple
running VMs.
2. Other features include managing the patching of all guests, backing up and
restoring the guests, and monitoring their resource use.
Virtualization can improve not only resource utilization but also resource management.
Some VMMs include a live migration feature that moves a running guest from one
physical server to another without interrupting its operation or active network connections.
Virtualization has laid the foundation for many other advances in computer facility
implementation, management, and monitoring.
Cloud computing, for example, is made possible by virtualization in which resources
such as CPU, memory, and I/O are provided as services to customers using Internet
technologies.
The ability to virtualize depends on the features provided by the CPU. If the features
are sufficient, then it is possible to write a VMM that provides a guest environment.
Otherwise, virtualization is impossible.
VMMs use several techniques to implement virtualization, including trap-and-emulate
and binary translation.
The important concept found in most virtualization options is the implementation of a
virtual CPU (VCPU).
The VCPU does not execute code. Rather, it represents the state of the CPU as the guest
machine believes it to be. For each guest, the VMM maintains a VCPU representing that
guest’s current CPU state.
When the guest is context-switched onto a CPU by the VMM, information from the
VCPU is used to load the right context, much as a general-purpose operating system
would use the PCB.
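To make the idea of a VCPU concrete, the following minimal C sketch shows the kind of per-guest CPU image a VMM might keep; all type and field names here are invented for illustration and do not come from any particular VMM.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative only: the register set a VMM might record per guest CPU. */
typedef enum { GUEST_USER_MODE, GUEST_KERNEL_MODE } guest_mode_t;

typedef struct vcpu {
    uint64_t gpr[16];        /* general-purpose registers as the guest last saw them */
    uint64_t pc;             /* guest program counter */
    uint64_t flags;          /* guest condition/flags register */
    uint64_t page_table_ptr; /* root of the guest's page table (guest-physical) */
    guest_mode_t mode;       /* virtual user or virtual kernel mode */
} vcpu_t;

/* Analogous to loading a PCB: copy the VCPU image onto the real CPU when the
 * VMM context-switches this guest in.  Here we only copy into a second
 * structure to keep the sketch runnable. */
static void load_vcpu(vcpu_t *hw_state, const vcpu_t *vcpu)
{
    memcpy(hw_state, vcpu, sizeof(*vcpu));
}

int main(void)
{
    vcpu_t guest = { .pc = 0x1000, .mode = GUEST_KERNEL_MODE };
    vcpu_t cpu;
    load_vcpu(&cpu, &guest);   /* the guest is now "on" the CPU */
    return 0;
}
```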
5.3.1 Trap-and-Emulate
On a typical dual-mode system, the virtual machine guest can execute only in user mode
(unless extra hardware support is provided). The kernel, of course, runs in kernel mode,
and it is not safe to allow user-level code to run in kernel mode.
Just as the physical machine has two modes, so must the virtual machine.
Consequently, we must have a virtual user mode and a virtual kernel mode, both of
which run in physical user mode.
Those actions that cause a transfer from user mode to kernel mode on a real machine
(such as a system call, an interrupt, or an attempt to execute a privileged instruction)
must also cause a transfer from virtual user mode to virtual kernel mode in the virtual
machine.
When the kernel in the guest attempts to execute a privileged instruction, that is an error
(because the system is in user mode) and causes a trap to the VMM in the real machine.
The VMM gains control and executes (or “emulates”) the action that was attempted by
the guest kernel on the part of the guest. It then returns control to the virtual machine.
This is called the trap-and-emulate method.
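The control flow of trap-and-emulate can be sketched as a handler that the VMM runs whenever the guest kernel's attempt at a privileged instruction traps. The toy instruction names and types below are invented for illustration; a real VMM would decode the actual faulting instruction.

```c
#include <stdio.h>

/* Toy "privileged instructions" a guest kernel might attempt. */
typedef enum { INSTR_SET_PAGE_TABLE, INSTR_DISABLE_INTERRUPTS } priv_instr_t;

typedef struct { int interrupts_enabled; unsigned long page_table; } vcpu_state_t;

/* Invoked when the hardware traps to the VMM because code running in
 * (physical) user mode tried to execute a privileged instruction.
 * The VMM emulates the effect on the VCPU instead of the real CPU. */
static void vmm_trap_handler(vcpu_state_t *vcpu, priv_instr_t instr,
                             unsigned long operand)
{
    switch (instr) {
    case INSTR_SET_PAGE_TABLE:
        vcpu->page_table = operand;      /* update the guest's view only        */
        break;
    case INSTR_DISABLE_INTERRUPTS:
        vcpu->interrupts_enabled = 0;    /* virtual interrupts, not real ones   */
        break;
    }
    /* ...then resume the guest at the instruction after the trap. */
}

int main(void)
{
    vcpu_state_t vcpu = { .interrupts_enabled = 1 };
    vmm_trap_handler(&vcpu, INSTR_DISABLE_INTERRUPTS, 0);
    printf("guest interrupts enabled: %d\n", vcpu.interrupts_enabled);
    return 0;
}
```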
With privileged instructions, time becomes an issue. All nonprivileged instructions run
natively on the hardware, providing the same performance for guests as native
applications.
Privileged instructions create extra overhead, however, causing the guest to run more
slowly than it would natively.
In addition, the CPU is being multiprogrammed among many virtual machines, which
can further slow down the virtual machines in unpredictable ways.
This problem has been approached in various ways.
❖ IBM VM, for example, allows normal instructions for the virtual machines to execute
directly on the hardware.
❖ Only the privileged instructions (needed mainly for I/O) must be emulated and hence
execute more slowly.
In general, with the evolution of hardware, the performance of trap-and-emulate
functionality has been improved, and cases in which it is needed have been reduced.
❖ For example, many CPUs now have extra modes added to their standard dual-mode
operation.
❖ The VCPU need not keep track of what mode the guest operating system is in, because
the physical CPU performs that function.
❖ In fact, some CPUs provide guest CPU state management in hardware, so the VMM
need not supply that functionality, removing the extra overhead.
5.3.2 Binary Translation
If the guest VCPU is in user mode, the guest can run its instructions natively on a physical
CPU.
If the guest VCPU is in kernel mode, then the guest believes that it is running in kernel
mode. The VMM examines every instruction the guest executes in virtual kernel mode by
reading the next few instructions that the guest is going to execute, based on the guest’s
program counter.
Instructions other than special instructions are run natively. Special instructions are
translated into a new set of instructions that perform the equivalent task—for example,
changing the flags in the VCPU.
Binary translation is shown in Figure. It is implemented by translation code within the
VMM. The code reads native binary instructions dynamically from the guest, on demand,
and generates native binary code that executes in place of the original code.
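As a rough illustration of the translation step, the toy loop below scans a block of "guest instructions" (a made-up opcode set, not real x86 decoding) and rewrites each special instruction into a call that the VMM can emulate, leaving ordinary instructions untouched.

```c
#include <stdio.h>

/* Toy opcodes: ADD is an ordinary instruction, POPF stands in for a "special"
 * instruction that silently changes privileged flag state instead of trapping. */
enum { OP_ADD, OP_POPF, OP_CALL_VMM_EMULATE };

/* Translate one block starting at the guest "program counter":
 * non-special opcodes pass through, special ones are rewritten. */
static int translate_block(const int *guest_code, int len, int *out)
{
    for (int i = 0; i < len; i++)
        out[i] = (guest_code[i] == OP_POPF) ? OP_CALL_VMM_EMULATE : guest_code[i];
    return len;
}

int main(void)
{
    int guest[] = { OP_ADD, OP_POPF, OP_ADD };
    int native[3];
    translate_block(guest, 3, native);
    for (int i = 0; i < 3; i++)
        printf("%d ", native[i]);   /* prints 0 2 0: the POPF was replaced */
    printf("\n");
    return 0;
}
```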
Let’s consider another issue in virtualization: memory management, specifically the page
tables. How can the VMM keep page-table state both for guests that believe they are
managing the page tables and for the VMM itself?
A common method, used with both trap-and-emulate and binary translation, is to use
nested page tables (NPTs). Each guest operating system maintains one or more page
tables to translate from virtual to physical memory.
The VMM maintains NPTs to represent the guest’s page-table state, just as it creates a
VCPU to represent the guest’s CPU state. The VMM knows when the guest tries to
change its page table, and it makes the equivalent change in the NPT.
When the guest is on the CPU, the VMM puts the pointer to the appropriate NPT into the
appropriate CPU register to make that table the active page table.
If the guest needs to modify the page table (for example, fulfilling a page fault), then that
operation must be intercepted by the VMM and appropriate changes made to the nested
and system page tables.
Unfortunately, the use of NPTs can cause TLB misses to increase, and many other
complexities need to be addressed to achieve reasonable performance.
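Conceptually, each guest memory reference now involves two translations: the guest's page table maps a guest-virtual page to a guest-physical page, and the VMM's nested table maps that guest-physical page to a host-physical page. The flat-array sketch below illustrates the two-step walk; real page tables are multi-level, and the mappings shown are arbitrary.

```c
#include <stdio.h>

#define PAGES 16

/* Guest page table: guest-virtual page -> guest-physical page (guest-controlled). */
static int guest_pt[PAGES]  = { [0] = 3, [1] = 7 };
/* Nested page table: guest-physical page -> host-physical page (VMM-controlled). */
static int nested_pt[PAGES] = { [3] = 12, [7] = 5 };

/* Two-step walk performed for each guest memory reference. */
static int translate(int guest_virtual_page)
{
    int guest_physical = guest_pt[guest_virtual_page];
    int host_physical  = nested_pt[guest_physical];
    return host_physical;
}

int main(void)
{
    printf("guest vpage 0 -> host page %d\n", translate(0));  /* 0 -> 3 -> 12 */
    printf("guest vpage 1 -> host page %d\n", translate(1));  /* 1 -> 7 -> 5  */
    return 0;
}
```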
Without some level of hardware support, virtualization would be impossible. The more
hardware support available within a system, the more feature-rich and stable the virtual
machines can be and the better they can perform.
In the Intel x86 CPU family, Intel added virtualization support (the VT-x instructions) in
successive generations beginning in 2005. Now, binary translation is no longer needed.
In fact, all major general-purpose CPUs now provide extended hardware support for
virtualization. For example, AMD virtualization technology (AMD-V) has appeared in
several AMD processors starting in 2006.
It defines two new modes of operation—host and guest—thus moving from a dual-mode
to a multimode processor.
The VMM can enable host mode, define the characteristics of each guest virtual machine,
and then switch the system to guest mode, passing control of the system to a guest
operating system that is running in the virtual machine.
In guest mode, the virtualized operating system thinks it is running on native hardware and
sees whatever devices are included in the host’s definition of the guest.
If the guest tries to access a virtualized resource, then control is passed to the VMM to
manage that interaction.
The functionality in Intel VT-x is similar, providing root and nonroot modes, equivalent to
host and guest modes. Both provide guest VCPU state data structures to load and save
guest CPU state automatically during guest context switches.
In addition, virtual machine control structures (VMCSs) are provided to manage guest
and host state, as well as various guest execution controls, exit controls, and information
about why guests exit back to the host.
AMD and Intel have also addressed memory management in the virtual environment. With
AMD’s RVI and Intel’s EPT memory-management enhancements, VMMs no longer need
to implement software NPTs.
In essence, these CPUs implement nested page tables in hardware to allow the VMM to
fully control paging while the CPUs accelerate the translation from virtual to physical
addresses.
The NPTs add a new layer, one representing the guest’s view of logical-to-physical
address translation.
The CPU page-table walking function (traversing the data structure to find the desired
data) includes this new layer as necessary, walking through the guest table to the VMM
table to find the physical address desired.
A TLB miss results in a performance penalty, because more tables (the guest and host
page tables) must be traversed to complete the lookup.
Figure 5.4 shows the extra translation work performed by the hardware to translate from a
guest virtual address to a final physical address.
I/O is another area improved by hardware assistance.
Consider that the standard direct-memory-access (DMA) controller accepts a target
memory address and a source I/O device and transfers data between the two without
operating-system action.
Without hardware assistance, a guest might try to set up a DMA transfer that affects the
memory of the VMM or other guests. In CPUs that provide hardware-assisted DMA (such
as Intel CPUs with VT-d), even DMA has a level of indirection.
First, the VMM sets up protection domains to tell the CPU which physical
memory belongs to each guest.
Next, it assigns the I/O devices to the protection domains, allowing them direct
access to those memory regions and only those regions.
The hardware then transforms the address in a DMA request issued by an I/O
device to the host physical memory address associated with the I/O.
In this manner, DMA transfers are passed through between a guest and a device without
VMM interference.
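The effect of a protection domain can be pictured as a per-guest remapping table that is consulted on every DMA request, with requests outside the guest's assigned memory rejected. The sketch below is purely illustrative and is not a real IOMMU interface.

```c
#include <stdio.h>

/* One protection domain: the slice of host physical memory a guest owns. */
typedef struct { unsigned long host_base; unsigned long size; } domain_t;

/* Translate a DMA target address issued by a device assigned to `dom`.
 * Returns -1 if the guest-relative address falls outside its domain. */
static long dma_translate(const domain_t *dom, unsigned long guest_addr)
{
    if (guest_addr >= dom->size)
        return -1;                      /* would touch memory the guest does not own */
    return (long)(dom->host_base + guest_addr);
}

int main(void)
{
    domain_t guest1 = { .host_base = 0x40000000UL, .size = 0x10000000UL }; /* 256 MB */
    printf("%ld\n", dma_translate(&guest1, 0x1000));        /* allowed and remapped  */
    printf("%ld\n", dma_translate(&guest1, 0x20000000UL));  /* -1: request blocked   */
    return 0;
}
```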
Similarly, interrupts must be delivered to the appropriate guest and must not be visible to
other guests.
By providing an interrupt remapping feature, CPUs with virtualization hardware
assistance automatically deliver an interrupt destined for a guest to a core that is currently
running a thread of that guest.
That way, the guest receives interrupts without any need for the VMM to intercede in
their delivery.
Without interrupt remapping, malicious guests could generate interrupts that could be
used to gain control of the host system.
Whatever the hypervisor type, at the time a virtual machine is created, its creator gives the
VMM certain parameters.
These parameters usually include:
❖ the number of CPUs,
❖ amount of memory,
❖ networking details,
❖ and storage details the VMM will take into account when creating the guest.
For example, a user might want to create a new guest with two virtual CPUs, 4 GB of
memory, 10 GB of disk space, one network interface that gets its IP address via DHCP, and
access to the DVD drive.
The VMM then creates the virtual machine with those parameters.
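The creation parameters handed to the VMM can be pictured as a simple record. The structure below mirrors the example in the text (two VCPUs, 4 GB of memory, a 10 GB disk, DHCP networking, DVD access); the layout and function names are invented for illustration.

```c
#include <stdio.h>
#include <stdbool.h>

/* Illustrative guest-creation request; no real VMM uses exactly this layout. */
typedef struct {
    const char *name;
    int         vcpus;
    int         memory_mb;
    int         disk_gb;
    bool        dhcp;        /* network interface gets its address via DHCP */
    bool        attach_dvd;
} vm_config_t;

static void create_guest(const vm_config_t *cfg)
{
    /* A real VMM would allocate the VCPUs, nested page tables, virtual disk,
     * and virtual NIC here; the sketch just echoes the request. */
    printf("creating %s: %d VCPUs, %d MB RAM, %d GB disk, dhcp=%d, dvd=%d\n",
           cfg->name, cfg->vcpus, cfg->memory_mb, cfg->disk_gb,
           cfg->dhcp, cfg->attach_dvd);
}

int main(void)
{
    vm_config_t cfg = { "guest1", 2, 4096, 10, true, true };
    create_guest(&cfg);
    return 0;
}
```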
In the case of a type 0 hypervisor, the resources are usually dedicated.
For other hypervisor types, the resources are dedicated or virtualized, depending on the
type.
Finally, when the virtual machine is no longer needed, it can be deleted. When this
happens, the VMM first frees up any used disk space and then removes the configuration
associated with the virtual machine, essentially forgetting the virtual machine.
Creating a virtual machine from an existing one can be as easy as clicking the “clone”
button and providing a new name and IP address. This ease of creation can lead to virtual
machine sprawl, which occurs when there are so many virtual machines on a system that
their use, history, and state become confusing and difficult to track.
Type 0 hypervisors have existed for many years under many names, including
“partitions” and “domains.”
They are a hardware feature, and that brings its own positives and negatives.
Operating systems need do nothing special to take advantage of their features.
A type 1 hypervisor acts like a lightweight operating system and runs directly on the
host’s hardware.
The most commonly deployed type of hypervisor is the type 1 or bare-metal hypervisor,
where virtualization software is installed directly on the hardware where the operating
system is normally installed.
Because bare-metal hypervisors are isolated from the attack-prone operating system, they
are extremely secure.
In addition, they generally perform better and more efficiently than hosted hypervisors.
For these reasons, most enterprise companies choose bare-metal hypervisors for data
centre computing needs.
By using type 1 hypervisors, data-center managers can control and manage the operating
systems and applications in new and sophisticated ways.
An important benefit is the ability to consolidate more operating systems and applications
onto fewer systems.
❖ For example, rather than having ten systems running at 10 percent utilization each, a data
center might have one server manage the entire load.
If utilization increases, guests and their applications can be moved to less-loaded systems
live, without interruption of service.
Using snapshots and cloning, the system can save the states of guests and duplicate those
states.
Another type of type 1 hypervisor includes various general-purpose operating systems
with VMM functionality.
❖ Here, an operating system such as Red Hat Enterprise Linux, Windows, or Oracle Solaris
performs its normal duties as well as providing a VMM allowing other operating systems to run as
guests.
Type-2 hypervisors run on the operating system of the physical host machine; hence they
are also called hosted hypervisors.
These hypervisors are hosted on the operating system, so the software stack, from bottom
to top, consists of:
❖ a physical server,
❖ an OS installed on that server hardware (an OS such as Windows, Linux, or macOS),
❖ the type-2 hypervisor on that OS, and
❖ virtual machine instances (guest VMs).
These hypervisors are usually used in environments where there are a small number of
servers.
They do not need a separate management console to set up and manage the virtual
machines. These operations can typically be done on the server that has the hypervisor
hosted. This hypervisor is basically treated as an application on your host system.
Simple management:
They essentially act as management consoles; there is no need to install a separate software
package to manage the virtual machines running on type-2 hypervisors.
Useful for testing purposes:
They are convenient for testing any new software or research projects. You can simply
run multiple instances with different OSes to test how the software works in each
environment.
5.4.6 Paravirtualization
Paravirtualization presents the guest with a system that is similar but not identical to the
guest's preferred hardware; the guest must be modified to run on this paravirtualized
hardware. The Xen VMM, an early leader in paravirtualization, did not let guests perform
sensitive operations (such as updating their page tables) directly; instead, guests had to
request them from Xen through hypercalls. This meant that the guest operating system's
kernel code had to be changed from the default code to these Xen-specific methods.
To optimize performance, Xen allowed the guest to queue up multiple page-table changes
asynchronously via hypercalls and then checked to ensure that the changes were complete
before continuing operation.
Xen allowed virtualization of x86 CPUs without the use of binary translation, instead
requiring modifications in the guest operating systems like the one described above.
Over time, Xen has taken advantage of hardware features supporting virtualization. As a
result, it no longer requires modified guests and essentially does not need the
paravirtualization method.
Paravirtualization is still used in other solutions, however, such as type 0 hypervisors.
5.4.7 Programming-Environment Virtualization
Another kind of virtualization, based on a different execution model, is the virtualization
of programming environments.
❖ For example, Oracle’s Java has many features that depend on its running in the
Java virtual machine (JVM), including specific methods for security and memory
management.
Java programs run within the JVM environment, and the JVM is compiled to be a native
program on systems on which it runs.
❖ This arrangement means that Java programs are written once and then can run on any
system (including all of the major operating systems) on which a JVM is available.
The same can be said of interpreted languages, which run inside programs that read
each instruction and interpret it into native operations.
5.4.8 Emulation
Emulation is useful when the host system has one system architecture, and the guest
system was compiled for a different architecture.
❖ For example, suppose a company has replaced its outdated computer system
with a new system but would like to continue to run certain important programs that were
compiled for the old system. The programs could be run in an emulator that translates
each of the outdated system’s instructions into the native instruction set of the new
system.
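At its core, an emulator is a fetch-decode-dispatch loop over the old machine's instructions, which is also why it is slow: each old instruction costs many native ones. The toy "old" instruction set below is invented purely to show the shape of that loop.

```c
#include <stdio.h>

/* Invented "old" instruction set: each instruction is an opcode plus operand. */
enum { OLD_LOAD_IMM, OLD_ADD, OLD_HALT };
typedef struct { int op; int arg; } old_instr_t;

static void emulate(const old_instr_t *prog)
{
    int acc = 0;                               /* emulated accumulator register */
    for (int pc = 0; ; pc++) {                 /* emulated program counter      */
        switch (prog[pc].op) {                 /* decode and dispatch           */
        case OLD_LOAD_IMM: acc  = prog[pc].arg; break;
        case OLD_ADD:      acc += prog[pc].arg; break;
        case OLD_HALT:     printf("acc = %d\n", acc); return;
        }
    }
}

int main(void)
{
    old_instr_t prog[] = { {OLD_LOAD_IMM, 5}, {OLD_ADD, 7}, {OLD_HALT, 0} };
    emulate(prog);                              /* prints acc = 12 */
    return 0;
}
```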
5.4.9 Application Containment
Application containment segregates applications from one another within a single
operating system rather than virtualizing hardware. Oracle Solaris provides this feature
through zones, also known as containers.
CPU and memory resources can be divided among the zones and the system-wide
processes. Each zone, in fact, can run its own scheduler to optimize the performance of
its applications on the allotted resources.
The above figure shows a Solaris 10 system with two containers and the standard “global” user
space.
Containers are much lighter weight than other virtualization methods. That is, they use
fewer system resources and are faster to instantiate and destroy, more similar to processes
than virtual machines. For this reason, they are becoming more commonly used,
especially in cloud computing.
FreeBSD was perhaps the first operating system to include a container-like feature
(called "jails").
Linux added the LXC container feature in 2014. It is now included in the common Linux
distributions via a flag in the clone() system call.
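On Linux, the namespace flags passed to clone() are what give a container its isolated view of the system. The minimal sketch below creates a child in new PID and UTS namespaces, so it sees itself as PID 1 and has its own hostname; it must be run as root, error handling is minimal, and real container runtimes add mount, network, and user namespaces plus cgroup limits on top of this.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];

static int child(void *arg)
{
    (void)arg;
    /* Inside the new PID namespace the child sees itself as PID 1. */
    printf("in container: pid = %d\n", getpid());
    sethostname("container", 9);      /* affects only the new UTS namespace */
    return 0;
}

int main(void)
{
    /* CLONE_NEWPID and CLONE_NEWUTS give the child its own PID numbering
     * and hostname. */
    pid_t pid = clone(child, child_stack + sizeof(child_stack),
                      CLONE_NEWPID | CLONE_NEWUTS | SIGCHLD, NULL);
    if (pid == -1) { perror("clone"); exit(1); }
    waitpid(pid, NULL, 0);
    return 0;
}
```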
Containers are also easy to automate and manage, leading to orchestration tools like
Docker and Kubernetes.
Orchestration tools are means of automating and coordinating systems and services.
Their aim is to make it simple to run entire suites of distributed applications, just as
operating systems make it simple to run a single program.
In this section, we take a deeper dive into the operating-system aspects of virtualization,
including how the VMM provides core operating-system functions like scheduling, I/O,
and memory management.
5.5.1 CPU Scheduling
❖ When there are enough CPUs to allocate the requested number to each
guest, the VMM can treat the CPUs as dedicated and schedule only a given guest’s
threads on that guest’s CPUs. In this situation, the guests act much like native
operating systems running on native CPUs.
The VMM itself needs some CPU cycles for guest management and I/O management and
can steal cycles from the guests by scheduling its threads across all of the system CPUs,
but the impact of this action is relatively minor.
More difficult is the case of overcommitment, in which the guests are configured for
more CPUs than exist in the system. Here, a VMM can use standard scheduling
algorithms to make progress on each thread but can also add a fairness aspect to those
algorithms.
❖ For example, if there are six hardware CPUs and twelve guest
allocated CPUs, the VMM can allocate CPU resources proportionally, giving each
guest half of the CPU resources it believes it has.
❖ The VMM can still present all twelve virtual CPUs to the guests, but in
mapping them onto physical CPUs, the VMM can use its scheduler to distribute
them appropriately.
Consider a time-sharing operating system that tries to allot 100 milliseconds to each time
slice to give users a reasonable response time. Within a virtual machine, that operating
system receives only what CPU resources the virtualization system gives it. A
100-millisecond time slice may take much more than 100 milliseconds of virtual CPU time.
Depending on how busy the system is, the time slice may take a second or more, resulting in
very poor response times for users logged into that virtual machine. The effect on a real-time
operating system can be even more serious.
5.5.2 Memory Management
❖ The VMM computes a target real-memory allocation for each guest based on the
configured memory for that guest and other factors, such as overcommitment and
system load.
When the total memory configured for the guests exceeds the physical memory of the
machine, the VMM must reclaim memory from the guests.
i) One approach is to provide double paging. Here, the VMM has its own
page-replacement algorithms and loads pages into a backing store that the guest believes is
physical memory.
ii) A common solution is for the VMM to install in each guest a pseudo-device driver or
kernel module that the VMM controls. (A pseudo-device driver uses device-driver interfaces,
appearing to the kernel to be a device driver, but does not actually control a device. Rather, it is an
easy way to add kernel-mode code without directly modifying the kernel.) This balloon memory
manager communicates with the VMM and is told to allocate or deallocate memory; a minimal
sketch of the idea appears after this list.
If told to allocate, it allocates memory and tells the operating system to pin the
allocated pages into physical memory.
Pinning locks a page into physical memory so that it cannot be moved or paged
out.
iii) Another common method for reducing memory pressure is for the VMM to determine
if the same page has been loaded more than once. If this is the case, the VMM reduces the number
of copies of the page to one and maps the other users of the page to that one copy.
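The following is a minimal sketch of the balloon idea from item (ii); every interface here is invented for illustration. When the VMM is short of memory it asks the balloon driver to inflate, the driver allocates (and, in a real guest, pins) pages, and the VMM can then reuse the underlying machine memory; deflating returns the pages to the guest.

```c
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

/* Illustrative balloon state: pages the driver has taken from the guest OS. */
static void *balloon_pages[1024];
static int   balloon_count;

/* Called when the VMM asks the guest to give back `npages` of memory.
 * In a real driver the pages would also be pinned so they cannot be paged
 * out, and their physical addresses reported to the VMM. */
static void balloon_inflate(int npages)
{
    for (int i = 0; i < npages && balloon_count < 1024; i++)
        balloon_pages[balloon_count++] = malloc(PAGE_SIZE);
    printf("balloon now holds %d pages\n", balloon_count);
}

/* Called when the VMM returns memory: free the pages so the guest OS can
 * use them again. */
static void balloon_deflate(int npages)
{
    for (int i = 0; i < npages && balloon_count > 0; i++)
        free(balloon_pages[--balloon_count]);
    printf("balloon now holds %d pages\n", balloon_count);
}

int main(void)
{
    balloon_inflate(8);   /* VMM is short on memory: take 8 pages from the guest */
    balloon_deflate(8);   /* pressure eased: give them back                      */
    return 0;
}
```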
5.5.3 I/O
If multiple operating systems have been installed, what and where is the boot disk?
❖ The solution to this problem depends on the type of hypervisor. Type 0 hypervisors
often allow root disk partitioning, partly because these systems tend to run fewer guests than other
systems.
❖ Alternatively, a disk manager may be part of the control partition, and that disk
manager may provide disk space (including boot disks) to the other partitions.
❖ Type 1 hypervisors store the guest root disk (and configuration information) in one or
more files in the file systems provided by the VMM.
❖ Type 2 hypervisors store the same information in the host operating system’s file
systems.
❖ A disk image, containing all of the contents of the root disk of the guest, is contained
in one file in the VMM.
Moving a virtual machine from one system to another that runs the same VMM is as
simple as halting the guest, copying the image to the other system, and starting the guest there.
Guests sometimes need more disk space than is available in their root disk image.
❖ For example, a nonvirtualized database server might use several file systems
spread across many disks to store various parts of the database. Virtualizing such a database
usually involves creating several files and having the VMM present those to the guest as disks.
The guest then executes as usual, with the VMM translating the disk I/O requests coming from
the guest into file I/O commands to the correct files.
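The translation from guest disk blocks to file I/O is simple offset arithmetic: a read of guest block n becomes a read at offset n times the block size within the disk-image file. The sketch below uses an invented image-file name and ignores caching, sparse images, and write ordering.

```c
#include <stdio.h>
#include <stdlib.h>

#define BLOCK_SIZE 512

/* Read guest disk block `block` from the file that backs the virtual disk. */
static int read_guest_block(FILE *disk_image, long block, unsigned char *buf)
{
    if (fseek(disk_image, block * BLOCK_SIZE, SEEK_SET) != 0)
        return -1;
    return fread(buf, 1, BLOCK_SIZE, disk_image) == BLOCK_SIZE ? 0 : -1;
}

int main(void)
{
    unsigned char buf[BLOCK_SIZE];
    FILE *img = fopen("guest-root.img", "rb");   /* hypothetical disk image */
    if (!img) { perror("fopen"); return 1; }
    if (read_guest_block(img, 0, buf) == 0)      /* guest's block 0 = boot block */
        printf("first byte of guest block 0: 0x%02x\n", buf[0]);
    fclose(img);
    return 0;
}
```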
One feature not found in general-purpose operating systems but found in type 0 and type
1 hypervisors is the live migration of a running guest from one system to another.
A running guest on one system is copied to another system running the same VMM.
The copy occurs with so little interruption of service that users logged in to the guest, as
well as network connections to the guest, continue without noticeable impact.
This rather astonishing ability is very powerful in resource management and hardware
administration.
After all, compare it with the steps necessary without virtualization. A live migration
proceeds in the following steps:
1. The source VMM establishes a connection with the target VMM and confirms that it is
allowed to send a guest.
2. The target creates a new guest by creating a new VCPU, a new nested page table, and
other state storage.
3. The source sends all read-only memory pages to the target.
4. The source sends all read-write pages to the target, marking them as clean.
5. The source repeats step 4, because during that step some pages were probably modified
by the guest and are now dirty. These pages need to be sent again and marked again as
clean.
6. When the cycle of steps 4 and 5 becomes very short, the source VMM freezes the guest,
sends the VCPU's final state, other state details, and the final dirty pages, and tells the
target to start running the guest.
7. Once the target acknowledges that the guest is running, the source terminates the guest.
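The heart of steps 4 through 6 is a pre-copy loop: keep resending the pages dirtied during the previous pass until the remaining dirty set is small enough to send during a brief freeze. The following sketch only simulates that loop (the fraction of pages re-dirtied per pass is made up); it is not a real VMM interface.

```c
#include <stdio.h>

#define GUEST_PAGES 1000
#define STOP_THRESHOLD 16   /* freeze the guest when this few pages remain dirty */

/* Simulate one pre-copy pass: "send" every dirty page, then pretend the
 * running guest dirtied a fraction of them again before the pass finished. */
static int precopy_pass(int dirty)
{
    printf("sending %d dirty pages\n", dirty);
    return dirty / 4;        /* stand-in for pages re-dirtied by the guest */
}

int main(void)
{
    int dirty = GUEST_PAGES;              /* step 4: all pages start dirty  */
    while (dirty > STOP_THRESHOLD)        /* step 5: repeat while too many  */
        dirty = precopy_pass(dirty);
    /* Step 6: freeze the guest, send the final dirty pages and VCPU state,
     * then tell the target to start running the guest. */
    printf("freezing guest; sending final %d pages and VCPU state\n", dirty);
    return 0;
}
```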
Live migration is possible because most of the guest's state is maintained within the
guest, for example, open file tables, system-call state, kernel state, and so on.
Because used disk space is usually much larger than used memory, disks associated with
the guest cannot be moved as part of a live migration.
Live migration makes it possible to manage data centers in entirely new ways. For
example, virtualization management tools can monitor all the VMMs in an environment
and automatically balance resource use by moving guests between the VMMs.
These tools can also optimize the use of electricity and cooling by migrating all guests off
selected servers if other servers can handle the load and powering down the selected
servers entirely.
If the load increases, the tools can power up the servers and migrate guests back to them.
5.6.1 iOS
iOS is a mobile operating system designed for the iPhone smartphone and iPad tablet
computer.
The general architecture of macOS and iOS consists of the following layers.
User experience layer
❖ This layer defines the software interface that allows users to interact with the computing devices.
macOS uses the Aqua user interface, which is designed for a mouse or trackpad, whereas iOS uses the
Springboard user interface, which is designed for touch devices.
Application frameworks layer
❖ This layer includes the Cocoa and Cocoa Touch frameworks, which provide an API for the
Objective-C and Swift programming languages.
❖ The primary difference between Cocoa and Cocoa Touch is that the former is used for
developing macOS applications, and the latter is used by iOS to provide support for hardware features
unique to mobile devices, such as touch screens.
Core frameworks.
❖ This layer defines frameworks that support graphics and media, including QuickTime and
OpenGL.
Kernel environment
❖ This environment, also known as Darwin, includes the Mach microkernel and the BSD UNIX
kernel.
Darwin’s structure is shown in Figure 5.12.
Darwin provides two system-call interfaces: Mach system calls (known as traps) and
BSD system calls (which provide POSIX functionality).
The interface to these system calls is a rich set of libraries that includes not only the
standard C library but also libraries that provide networking, security, and programming
language support.
Fundamental operating-system services provided by iOS include:
❖ memory management,
❖ CPU scheduling,
❖ interprocess communication (IPC) facilities such as message passing and remote
procedure calls (RPCs).
Much of the functionality provided by Mach is available through kernel abstractions,
which include tasks (a Mach process), threads, memory objects, and ports (used for IPC).
The kernel environment provides an I/O kit for development of device drivers and
dynamically loadable modules (which macOS refers to as kernel extensions, or kexts).
5.6.2 ANDROID
The Android operating system was designed by the Open Handset Alliance (led primarily
by Google) and was developed for Android smartphones and tablet computers.
Whereas iOS is designed to run on Apple mobile devices and is closed-source, Android
runs on a variety of mobile platforms and is open source.
Because Android can run on an almost unlimited number of hardware devices, Google
has chosen to abstract the physical hardware through the hardware abstraction layer, or
HAL.
By abstracting all hardware, such as the camera, GPS chip, and other sensors, the HAL
provides applications with a consistent view independent of specific hardware.
This feature, of course, allows developers to write programs that are portable across
different hardware platforms.
The standard C library used by Linux systems is the GNU C library (glibc). Google
instead developed the Bionic standard C library for Android.
Not only does Bionic have a smaller memory footprint than glibc, but it also has been
designed for the slower CPUs that characterize mobile devices.
At the bottom of Android’s software stack is the Linux kernel. Google has modified the
Linux kernel used in Android in a variety of areas to support the special needs of mobile
systems, such as power management.
It has also made changes in memory management and allocation and has added a new
form of IPC known as Binder.
REVIEW QUESTIONS
PART-A (2-Marks)
Fidelity.
❖ A VMM provides an environment for programs that is essentially identical to
the original machine.
Performance.
❖ Programs running within that environment show only minor performance
decreases
Safety.
❖ The VMM is in complete control of system resources.
➢ In Type 0 Hypervisor the VMM itself is encoded in the firmware and loaded at boot time. In
turn, it loads the guest images to run in each partition.
➢ A type 0 hypervisor can run multiple guest operating systems (one in each hardware partition).
➢ All those guests, because they are running on raw hardware, can in turn be VMMs.
➢ A type 1 hypervisor acts like a lightweight operating system and runs directly on the host’s hardware.
➢ Type-2 hypervisors run on the operating system of the physical host machine; hence they are
also called hosted hypervisors.
➢ These hypervisors are hosted on the operating system, so the stack consists of a physical server,
an OS installed on that server hardware, the type-2 hypervisor on that OS, and the guest virtual machines.
Simple management:
❖ They essentially act as management consoles. There is no need to install a separate software
package to manage the virtual machines running on type-2 hypervisors.
Useful for testing purposes:
❖ They are convenient for testing any new software or research projects. You can simply run
multiple instances with different OSes to test how the software works in each environment.
➢ Emulation is useful when the host system has one system architecture, and the guest system
was compiled for a different architecture.
➢ For example, suppose a company has replaced its outdated computer system with a new system
but would like to continue to run certain important programs that were compiled for the old system. The
programs could be run in an emulator that translates each of the outdated system’s instructions into the
native instruction set of the new system.
➢ The major challenge of emulation is performance. Instruction-set emulation may run an order
of magnitude slower than native instructions, because it may take ten instructions on the new system to
read, parse, and simulate an instruction from the old system.
➢ Another challenge for emulator writers is that it is difficult to create a correct emulator
because, in essence, this task involves writing an entire CPU in software.
➢ A virtual machine monitor, also called a “hypervisor,” is one of many hardware
virtualization techniques that allow multiple operating systems, termed guests, to run concurrently on a
host computer.
➢ iOS is a mobile operating system designed for the iPhone smartphone and iPad tablet
computer.
16. What are the system-call interfaces provided by Darwin’s layered system?
➢ Darwin provides two system-call interfaces: Mach system calls (known as traps) and BSD
system calls (which provide POSIX functionality).
17. List down the fundamental operating-system services provided by iOS.
➢ Memory management,
➢ CPU scheduling,
➢ Interprocess communication (IPC) facilities such as message passing and remote procedure
calls (RPCs).
➢ The Android operating system was designed by the Open Handset Alliance (led primarily by
Google) and was developed for Android smartphones and tablet computers.
➢ Whereas iOS is designed to run on Apple mobile devices and is closed-source, Android runs
on a variety of mobile platforms and is open source.
➢ Live migration refers to the process of moving a virtual machine (VM) running on one physical
host to another host without disrupting normal operations or causing any downtime or other adverse effects
for the end user. Live migration is considered a major step in virtualization.
Full virtualization vs. paravirtualization:
➢ In full virtualization, the guest’s hardware requests are separated from the physical hardware
that facilitates them; in paravirtualization, the guest OS is recompiled prior to installation inside a VM.
➢ In full virtualization, the guest OS issues hardware calls to access hardware; in
paravirtualization, the guest OS directly communicates with the hypervisor using drivers.
Type 1 vs. type 2 hypervisors:
➢ A type 1 hypervisor runs directly on the host hardware; a type 2 hypervisor runs on an
operating system, similar to other computer programs.
PART-B
1. What is Virtual Machine? Elaborate on the History, benefits, and features of Virtual Machine with
examples.
2. Discuss briefly about the building blocks of virtualization with neat diagrams.
3. Explain briefly about the different types of Virtual machines with neat diagrams.
4. What is Hypervisor? Discuss about the different types of virtualization with neat diagrams.
5. Write short notes on Full Virtualization and Para Virtualization with neat diagrams.
6. Explain briefly about virtualization and operating system components with examples.
7. What is Live Migration? Discuss about the steps involved in Live Migration with neat diagrams.