
IEEE International Conference on Advances in Engineering & Technology Research (ICAETR - 2014),

August 01-02, 2014, Dr. Virendra Swarup Group of Institutions, Unnao, India

Empirical Study of Virtual Disks Performance with KVM on DAS

Gauri Joshi, S.T. Shingade, M.R. Shirole
Computer Science Department, VJTI, Matunga, Mumbai, India
jo.6.gauri@gmail.com, stshingade@vjti.org.in, mrshirole@vjti.org.in

Abstract— There is an exponentially increasing demand for data generation, storage, access, and communication. To fulfil these demands, the concept of cloud computing came into the picture. The key concept operating at the basic level of the cloud computing stack is virtualization. Virtual machine (VM) state is represented as a virtual disk file (image) that is created on the hypervisor's local file system, from where the virtual machine is booted up; a virtual machine requires at least one disk to boot and start functioning. Within a guest operating system, one can use block devices or files as virtual disks with the Kernel-based Virtual Machine (KVM). To date, no empirical study has been performed on the different types of virtual disk image formats to quantify their runtime performance. We have studied a representative application workload, I/O micro-benchmarks, on a local file system, i.e. a direct-attached storage (DAS) environment, in conjunction with RAW, QEMU's copy-on-write QCOW2, Microsoft's VHD, VirtualBox's VDI, VMware's VMDK and Parallels' HDD. We have also investigated the impact of block size on application runtime performance. This paper seeks to provide a detailed runtime performance analysis of the different image formats based on parameters such as latency, bandwidth, and I/Os performed per second (IOPS). Today users have a choice of virtual disks from a pool of virtual disk image formats, but currently it is a black-box selection, as no comparison or decision model exists for the different formats. This study is done to provide insight into the performance of the various virtual disk image formats and to offer guidelines to virtual disk end users in implementing and using them.

Keywords- Virtualization, KVM hypervisor, Virtual machine, Virtual disk, fio, Virtual disk image formats.

I. INTRODUCTION

The performance of the cloud has become important due to increasing workload [1]. The key concept operating at the lower level of the cloud computing stack is virtualization: for the majority of high-performing clouds, the underpinning is a virtualized infrastructure. Virtualization has been used in data centres for several years as a successful IT strategy for consolidating servers. The main purpose of virtualization is to pool infrastructure resources; beyond that, it provides agility and flexibility to the cloud environment. "Virtualization, in computing, is the creation of a virtual version of something, such as a hardware platform, operating system, a storage device or network resources" [6].

Basically, virtualization is a technique that divides a physical computer into several isolated machines known as virtual machines (VMs). A client or server operating system is required if VMs are created within a hypervisor or a virtualization platform. The virtual machine has been serving as a crucial component in cloud computing with its rich set of features [15]. Multiple virtual machines can run on a host computer, each possessing its own operating system and applications.

A virtual disk is created on the hypervisor's local file system, from where the virtual machine is booted up. Virtual disks can be created via different processes, for example during virtual machine creation, or independently inside a storage repository. With KVM, block devices or files can be used as local storage in the guest operating systems. Files are commonly known as virtual disk image files for the following reasons [4]:

• Disk image files are available to the hypervisor as files.
• Similar to block devices, disk image files represent a local mass storage disk.

A disk image file can be considered a local hard disk for the guest operating system. The maximum size of the virtual disk is equal to the size of the disk image file; a disk image file of 50 GB can create a virtual disk of 50 GB. The virtual disk location may be outside the domain of the guest operating system and the virtual machine: the guest operating system has limited access and rights, and can access only information related to the size of the virtual disk. As shown in Fig. 1, along with DAS, the storage space for virtual machines' virtual disks can be allocated from multiple sources, such as network-attached storage (NAS) or a storage area network (SAN), each offering different performance, reliability, and availability at a different price. DAS is at least several times cheaper than NAS and SAN, but DAS limits the availability and mobility of VMs [2].

In this paper, we have conducted a performance study using I/O micro-benchmark workloads on a local file system environment. The virtual disks RAW, QEMU's copy-on-write QCOW2, Microsoft's VHD, VirtualBox's VDI, VMware's VMDK and Parallels' HDD are evaluated against the bandwidth, latency and IOPS parameters. We have also studied the impact of block size and file size at the hypervisor level on application runtime performance.
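As a concrete illustration (our sketch, not part of the original study; file names and sizes are arbitrary), each of the image formats evaluated here can be created with QEMU's qemu-img utility, where "vpc" is qemu-img's name for Microsoft's VHD format and "parallels" for Parallels' HDD. Which of these formats are writable depends on the installed qemu-img version.

# create a 10 GB virtual disk in each of the formats under study
qemu-img create -f raw disk.raw 10G
qemu-img create -f qcow2 disk.qcow2 10G
qemu-img create -f vdi disk.vdi 10G
qemu-img create -f vpc disk.vhd 10G
qemu-img create -f vmdk disk.vmdk 10G
qemu-img create -f parallels disk.hdd 10G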


Figure 1. Allocation techniques of virtual disks

The remainder of the paper is organized as follows. Section II provides background information and related work. Section III describes the methodology of our performance study. Section IV presents the results of the study with a detailed analysis of one scenario. Section V presents concluding remarks and directions for future work.

II. BACKGROUND AND RELATED WORK

A virtual disk is the logical disk a virtual machine uses to perform I/O operations and to boot its operating system. The VM environment is shown in Fig. 2: a hard disk image is interpreted by a Virtual Machine Monitor (VMM) as a system hard disk drive.

Figure 2. VM environment

An Infrastructure as a Service (IaaS) cloud encapsulates user applications into virtual machines. The VMs are distributed over a large number of compute nodes to share the physical infrastructure. Virtualization enables many features, such as consolidation for improving resource efficiency, live migration for easier maintenance, and so forth. The hard disk drive of a virtual machine (i.e., the virtual disk) is typically emulated with a regular file on the hypervisor host (i.e., the VM image file). I/O requests received at virtual disks are translated by the virtualization driver into regular file I/O requests to the image files.

A typical IaaS cloud, such as Amazon Elastic Compute Cloud (EC2), has thousands of VM images. In order to create a new VM instance in an IaaS cloud, a VM image needs to be available at its hypervisor host. As illustrated in Figure 1, one straightforward solution is to pre-copy the entire image to the compute nodes before a new VM is started. If an instance uses an image that the target hypervisor does not have, it may take a long time to start up that instance: a typical VM image file contains multiple gigabytes or even tens of gigabytes of data, which leads to severe delays in a heavily loaded cloud environment [1]. Subsequent instances that use the same image on that host can start up faster, as the image is locally available. An alternative method to address this issue is to transfer the image data in an on-demand streaming fashion, where the parts of an image are copied as needed from the shared storage system to the hypervisor hosts. This scheme is used by cloud operating environments such as IBM SmartCloud Provisioning (SCP) [1].

VM images can be stored in different formats. The most straightforward option is the RAW format, where I/O requests to the virtual disk are served via a simple block-to-block address mapping. In order to support multiple VMs running on the same base image, copy-on-write techniques have been widely used, where a local snapshot is created for each VM to store all modified data blocks; the underlying image files remain unchanged until new images are captured. There are different copy-on-write schemes, including QEMU's QCOW2, Microsoft's VHD, VirtualBox's VDI, VMware's VMDK, Parallels' HDD, and so forth. In some schemes, such as QCOW2, a separate file is created to store all data blocks that have been modified by the provisioned VM.
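This copy-on-write behaviour can be demonstrated directly with qemu-img (a sketch of ours, not taken from the paper; file names are arbitrary): a QCOW2 overlay is created on top of a base image, writes from the VM go to the overlay, and the base image stays unchanged, so many VMs can share one base image.

# shared base image, left unmodified by the VMs
qemu-img create -f raw base.raw 10G
# per-VM copy-on-write overlay referencing the base
qemu-img create -f qcow2 -o backing_file=base.raw,backing_fmt=raw vm1.qcow2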
There have been many efforts on benchmarking the application runtime performance of virtual disks on local file systems. Our paper analyzes the impact of the different VM image formats on application runtime performance on a local file system, with the focus on a performance study of the virtual disks themselves.
III. METHODOLOGY

To represent a typical virtualization environment, we set up an experiment testbed as shown in Fig. 3. We then configure the hypervisor to use the various virtual disks described in the previous section. Application workloads are executed on this testbed using these virtual disk configurations.


A. EXPERIMENT ENVIRONMENT

Our experiment testbed consists of one machine of minimum configuration: an Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz with 4GB RAM and virtualization extensions, as shown in Fig. 3. The machine is used as a compute node running Ubuntu 12.04 LTS with the KVM hypervisor. Six VMs are created on this machine; each is provided with a 10GB volume and 512MB RAM.

Figure 3. Hardware setup for performance study (Linux guest VMs and their applications on RAW/QCOW2/VMDK and VDI/VHD/HDD virtual disks, running on KVM over Ubuntu 12.04 LTS)
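For illustration, a guest matching this testbed could be started with a command line such as the following (a sketch under our assumptions; the paper does not give the exact invocation, and the disk file name is hypothetical). Substituting format=raw, vdi, vpc, vmdk or parallels selects one of the other image formats.

# boot a KVM guest with 512MB RAM from a QCOW2 virtual disk
qemu-system-x86_64 -enable-kvm -m 512 \
    -drive file=disk.qcow2,format=qcow2 \
    -boot c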
B. APPLICATION WORKLOAD

We use an I/O micro-benchmark workload to characterize the VM runtime performance under different conditions.

IO micro-benchmarks: We use a benchmark tool called fio. fio avoids the need to write a new test program each time an IO workload has to be simulated. The steps to simulate a desired IO workload using fio are:

• Write a job file describing a specific setup. It may contain any number of threads, jobs and/or files.
• Run the job file. fio parses the file and sets up everything as declared in it while executing the job.

The job file consists of a global section and one or more job sections. The global section defines shared parameters, and each job section describes one job. Broken down from top to bottom, a job contains the following basic parameters [5]:

IO type – The IO pattern issued to the file(s): sequential read, sequential write, random read, random write, or a combination of reads and writes, issued sequentially or randomly.
Block size – The chunk size (a single value, or a range of block sizes) used to issue IO.
IO size – How much data is going to be read/written.
IO engine – How the IO is issued. IO could be memory-mapping the file, using regular read/write, using splice, async IO, syslet, or even SG (SCSI generic sg).
IO depth – If the IO engine is async, the maximum queue depth to maintain.
IO type – Buffered IO, or direct/raw IO.
Num files – The number of files spread across the workload.
Num threads – The number of threads or processes spread across the workload.

Along with these basic parameters, a number of parameters modify other aspects of the job. Below is a sample fio script that writes sequentially to a file.

; -- start job file --
[seq-writers]
ioengine=libaio
iodepth=4
rw=write
bs=8k
direct=0
size=128m
numjobs=4
; -- end job file --

No global section is defined here. An IO depth of 4 is used for the file, along with async IO. numjobs=4 forks four identical jobs, so the four processes each write sequentially to their own 128MB file. The block size is set to 8K, and buffered IO is used since direct is not set to 1. The output of this fio job file looks like the following.
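Assuming the job file above is saved as seq-writers.fio (the file name is ours), output like that shown next is produced simply by running:

fio seq-writers.fio

The same job can also be expressed entirely on the command line, e.g. fio --name=seq-writers --ioengine=libaio --iodepth=4 --rw=write --bs=8k --direct=0 --size=128m --numjobs=4.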


seq-write: (g=0): rw=write, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=4
...
seq-write: (g=0): rw=write, bs=8K-8K/8K-8K, ioengine=libaio, iodepth=4
fio 1.59
Starting 4 processes
seq-write: Laying out IO file(s) (1 file(s) / 128MB)
seq-write: Laying out IO file(s) (1 file(s) / 128MB)
seq-write: Laying out IO file(s) (1 file(s) / 128MB)
seq-write: Laying out IO file(s) (1 file(s) / 128MB)

seq-write: (groupid=0, jobs=1): err= 0: pid=2740
  write: io=131072KB, bw=403124 B/s, iops=49, runt=332944msec
    slat (usec): min=4, max=74700, avg=140.89, stdev=2318.99
    clat (msec): min=40, max=254, avg=81.14, stdev=14.12
    lat (msec): min=44, max=254, avg=81.28, stdev=14.12
    bw (KB/s): min=263, max=560, per=25.00%, avg=393.55, stdev=36.60
  cpu: usr=0.09%, sys=0.22%, ctx=16196, majf=0, minf=20
  IO depths: 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued r/w/d: total=0/16384/0, short=0/0/0
    lat (msec): 50=0.01%, 100=91.44%, 250=8.53%, 500=0.02%
-
-
-
seq-write: (groupid=0, jobs=1): err= 0: pid=2743
  write: io=131072KB, bw=403070 B/s, iops=49, runt=332988msec
    slat (usec): min=4, max=104502, avg=59.83, stdev=1466.12

Run status group 0 (all jobs):
  WRITE: io=524288KB, aggrb=1574KB/s, minb=403KB/s, maxb=403KB/s, mint=332944msec, maxt=332988msec

Disk stats (read/write):
  sda: ios=0/16657, merge=0/49577, ticks=0/1337768, in_queue=1337900, util=100.00%

In the output, the client number, group id, process id, and error status of each thread are printed, followed by the IO statistics for the example above:

io – The amount of IO performed (here reported in KB).
bw – The average bandwidth rate.
iops – The average number of IOs performed per second.
runt – The runtime of the thread.
slat – Submission latency: the time taken to submit the IO.
clat – Completion latency: the time from IO submission to completion.
lat – Latency: a fairly new metric that is not documented in the man page. Looking at the code, this metric starts the moment the IO structure is created in fio and completes right after clat, making it the one that best represents what applications will experience. It is not simply slat plus clat.
bw – Bandwidth: the approximate percentage of the total aggregate bandwidth that this particular thread received in its group. If the threads in a group are on the same disk, the last value is especially useful, since they compete for the same disk access.
cpu – CPU usage of the system and user, along with the number of context switches the thread went through and, finally, the number of major and minor page faults.

In our experiments, we have concentrated on the bandwidth (KB/s), latency (ms) and IOPS parameters.
IV. EXPERIMENT RESULTS

We have performed experiments with the IO micro-benchmark application workload in the virtualized environment. These experiments test the application runtime performance over the following parameters:

File Size = 64MB to 2048MB,
Block Size = 4KB to 128KB, and
IO Patterns = Sequential Read, Sequential Write, Random Read, Random Write, Random Read Write (RW).

Analysis is done based on the bandwidth (KB/s), latency (ms) and IOPS parameters. We have observed that the results vary with the file size and block size, so we performed a number of experiments to see the variations in application runtime performance on the virtual disks. For detailed analysis we have selected the following scenario:

File size = 1024MB,
Block Size = 4KB to 128KB, and
IO Patterns = Sequential Read, Sequential Write, Random Read, Random Write, Random Read Write (RW).
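As an illustration of how one cell of this scenario maps onto an fio job (our sketch; the authors' exact job files are not given in the paper), a sequential read run over the 1024MB file at a 4KB block size would look like the following; the block size is then varied up to 128KB, and rw is switched among write, randread, randwrite and randrw for the other patterns.

; hypothetical job file for one run of the detailed-analysis scenario
[detailed-scenario]
ioengine=libaio
iodepth=4
rw=read
bs=4k
size=1024m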

Figure 4. Sequential Read, Bandwidth
Figure 5. Sequential Read, Latency
Figure 6. Sequential Read, IOPS

For Sequential Read, Fig. 4, Fig. 5, and Fig. 6 explore bandwidth, latency, and IOPS for the different virtual hard disks. As shown in Fig. 4, RAW and QCOW2 perform well across different values of file size and block size; VDI works well when the block size is small, and VHD works well when the file size is large. Latency is very low for RAW and QCOW2, and very high for HDD and VHD, as shown in Fig. 5. As shown in Fig. 6, RAW performs a large number of sequential read I/O operations per second, QCOW2 a moderate number, and HDD very few. We have also observed that the IOPS value for VDI increases as the file size increases.

The preferable sequence of virtual hard disks for Sequential Read is RAW, QCOW2, VDI, VMDK, VHD, HDD.


Figure 7. Random Read, Bandwidth
Figure 8. Random Read, Latency
Figure 9. Random Read, IOPS

For Random Read, Fig. 7, Fig. 8, and Fig. 9 explore bandwidth, latency, and IOPS for the different virtual hard disks. As shown in Fig. 7, RAW has the highest bandwidth under almost all conditions, and QCOW2 is close to RAW in many configurations; the other virtual disks are not well suited to this operation if bandwidth is the selection criterion. Latency is very low for RAW and QCOW2, as shown in Fig. 8. As shown in Fig. 9, RAW performs a large number of random read I/O operations per second, and QCOW2 also performs well here. Considering all the experiments and results, the preferable sequence of virtual hard disks for Random Read is RAW, QCOW2, VDI, VMDK, VHD, HDD.

Figure 10. Sequential Write, Bandwidth
Figure 11. Sequential Write, Latency
Figure 12. Sequential Write, IOPS

For Sequential Write, Fig. 10, Fig. 11, and Fig. 12 explore bandwidth, latency, and IOPS for the different virtual hard disks. As shown in Fig. 10, VDI and VMDK are best suited to this operation, and HDD also does well; RAW and QCOW2 are less preferable, although their performance improves with large block sizes. VDI takes very little time (latency) to complete the operation in different situations, while RAW and QCOW2 take the longest, as shown in Fig. 11. As shown in Fig. 12, VDI and VMDK perform a large number of sequential write I/O operations per second, VHD and HDD a moderate number, and RAW and QCOW2 are not preferable at all.

The preferable sequence of virtual hard disks for Sequential Write is VDI, VMDK, VHD, HDD, RAW, QCOW2.


Figure 13. Random Write, Bandwidth
Figure 14. Random Write, Latency
Figure 15. Random Write, IOPS

For Random Write, Fig. 13, Fig. 14, and Fig. 15 explore bandwidth, latency, and IOPS for the different virtual hard disks. As shown in Fig. 13, VDI and VMDK are best suited to this operation, and HDD also does well; RAW and QCOW2 are less preferable if bandwidth is the selection parameter, although their performance improves with large block sizes. VDI takes very little time (latency) to complete the operation in different situations, while VHD, HDD, QCOW2 and RAW show the highest latency while performing the operation, as shown in Fig. 14. As shown in Fig. 15, RAW and QCOW2 perform a large number of random write I/O operations per second. The preferable sequence for this IO pattern therefore depends entirely on the selection parameter (bandwidth, latency, or IOPS). In general, the preferable sequence of virtual hard disks for Random Write is VDI, VMDK, RAW, QCOW2, HDD, VHD.

Figure 16. Random RW(50), Bandwidth
Figure 17. Random RW(50), Latency
Figure 18. Random RW(50), IOPS


For Random Read Write operations with an equal mix of reads and writes, Fig. 16, Fig. 17, and Fig. 18 explore bandwidth, latency, and IOPS for the different virtual hard disks. An equal mix means the percentage of reads and writes is 50:50; the results vary with the percentage of read and write operations. As shown in Fig. 16, VDI and VMDK perform well for both reads and writes and for different read/write mixes, RAW and HDD do reasonably well, and QCOW2 also performs well when the read and write mix is equal. The latency of RAW and QCOW2 is lower for read operations, while the latency of VDI and VMDK is lower for write operations, as shown in Fig. 17. As shown in Fig. 18, VDI and VMDK execute a large number of random RW I/O operations per second, VHD and HDD perform a moderate number, QCOW2 and HDD perform an almost equal number of I/O operations per second, and RAW and QCOW2 are not preferable here.

In this case the preferable sequence of virtual hard disks for Random RW is VDI, VMDK, RAW, HDD, QCOW2, VHD.
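For reference, the read/write mix of such workloads is controlled in fio by the rwmixread parameter (a sketch of ours, not taken from the paper's job files); the 70:30 case analyzed next simply sets rwmixread=70.

; 50:50 random read/write mix
[randrw-mix]
ioengine=libaio
rw=randrw
rwmixread=50
bs=4k
size=1024m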

Figure 19. Random RW(30), Bandwidth
Figure 20. Random RW(30), Latency
Figure 21. Random RW(30), IOPS

For Random Read Write operations with a skewed read/write mix, Fig. 19, Fig. 20 and Fig. 21 explore bandwidth, latency, and IOPS for the different virtual hard disks. Here the ratio of reads to writes is 70:30. As shown in Fig. 19, VDI and VMDK perform well in this case, and the performance of RAW and HDD is good. The latency of RAW and QCOW2 is lower for read operations, the latency of VDI and VMDK is lower for write operations, and the latency of VHD and HDD is very high for read operations, as shown in Fig. 20. As shown in Fig. 21, VDI and VMDK execute a large number of random RW I/O operations per second, VHD and HDD perform a moderate number, QCOW2 and HDD perform an almost equal number of I/O operations per second, and HDD, RAW and QCOW2 are not preferable. With this skewed mix ratio the performance of QCOW2 even decreases slightly; in such cases the RAW disk performs better.

The preferable sequence of virtual hard disks for Random RW is again VDI, VMDK, RAW, HDD, QCOW2, VHD.
Overall, higher values of bandwidth and IOPS and lower values of latency constitute better results. Still, the preferable sequence of virtual hard disks depends entirely on the parameters that matter to the application. For example, if an application cares only about the time required to complete an operation, then the latency of the virtual hard disk is the important factor, and bandwidth and IOPS matter less. From the above graphs we can conclude that RAW and QCOW2 perform well for Sequential and Random Read operations. VDI is well suited to Sequential Write, and VMDK is also a good choice there. For Random Write, VDI and VMDK are better if bandwidth and latency are the deciding factors, but when considering only IOPS, RAW outperforms VDI and VMDK. For Random Read Write (RW) operations, VDI and VMDK give the best overall performance. On the whole, VDI is suitable for all types of write operations.


V. FUTURE WORK

As future work, we plan to expand the scope of this study. First, we will include additional benchmarks, e.g. a data-intensive application such as MySQL. Second, we plan to perform experiments in a cloud environment, i.e. NAS and SAN environments using the NFS and iSCSI protocols. We also plan an in-depth analysis of the root causes of the observed runtime performance by studying the internal structure of the virtual disk image formats. Finally, we plan to establish an analytical model for the runtime performance.
VI. CONCLUSION

This paper presents an empirical study of different types of virtual disk image formats with KVM on a local file system. We selected an I/O micro-benchmark workload to study the performance. Exhaustive experiments provide evidence that selecting the appropriate virtual hard disk can significantly increase the performance of the I/O operations issued by the user. This study should help developers while building new virtual hard disks or modifying existing ones.

REFERENCES

[1] Han Chen, Minkyong Kim, Zhe Zhang, Hui Lei, "Empirical Study of Application Runtime Performance using On-demand Streaming Virtual Disks in the Cloud," in Proceedings of the Industrial Track of the 13th ACM/IFIP/USENIX International Middleware Conference (MIDDLEWARE '12), ACM, New York, NY, USA, Article 5, 6 pages. DOI: 10.1145/2405146.2405151, http://doi.acm.org/10.1145/2405146.2405151
[2] Chunqiang Tang, "FVD: a high-performance virtual machine image format for cloud," in Proceedings of the 2011 USENIX Annual Technical Conference (USENIX ATC '11), USENIX Association, Berkeley, CA, USA, 2011.
[3] Daniel A. Menascé, "Virtualization: Concepts, Applications, and Performance Modeling."
[4] Kernel Virtual Machine (KVM): Best practices for KVM, IBM Corp., 2010, 2012.
[5] I/O micro-benchmark tool (fio HOWTO). See http://www.bluestop.org/fio/HOWTO.txt
[6] http://en.wikipedia.org/wiki/Virtualization
[7] The QCOW2 Image Format. See https://people.gnome.org/~markmc/qcow-image-format.html
[8] VirtualBox VDI Image Format. See http://forums.virtualbox.org/viewtopic.php?t=8046
[9] Microsoft VHD Image Format. See http://technet.microsoft.com/en-us/library/bb676673.aspx#EHB
[10] VMware Virtual Disk Format 1.1. See http://www.vmware.com/technical-resources/interfaces/VMDK.html
[11] B. Shah, "Disk performance of copy-on-write snapshot logical volumes," PhD thesis, University of British Columbia, 2006.
[12] Liang Yang, Anthony F. Voellm, "Virtual Hard Disk Performance," Microsoft White Paper, March 2010.
[13] http://www.storagereview.com/fio_flexible_i_o_tester_synthetic_benchmark
[14] https://github.com/radii/fio/tree/master/examples
[15] Xun Zhao, Yang Zhang, Yongwei Wu, Kang Chen, Jinlei Jiang, Keqin Li, "Liquid: A Scalable Deduplication File System for Virtual Machine Images," IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 5, pp. 1257-1266, May 2014. DOI: 10.1109/TPDS.2013.173
[16] Zhiming Shen, Zhe Zhang, Andrzej Kochut, Alexei Karve, Han Chen, Minkyong Kim, Hui Lei, Nicholas Fuller, "VMAR: Optimizing I/O Performance and Resource Utilization in the Cloud."
[17] en.wikipedia.org/wiki/Virtual_disk_image

