Comparison of Open Source Virtualization Technology


Compilation of the Linux kernel: normalized time (vanilla kernel = 1)

Linux-VServer                0.998
Xen                          0.960
OpenVZ                       0.929
KVM                          0.679
VirtualBox (-hwvirtex off)   0.456
KQEMU                        0.357
VirtualBox (-hwvirtex on)    0.347
QEMU                         0.049

Virt. solution   Version                  Server kernel      VMs kernel
Host             --                       2.6.22-14-server   --
Linux-VServer    2.2.0.5                  2.6.22.14          (same)
Xen              3.1.0                    2.6.22-14-xen      (same)
KVM              58                       2.6.22-14-server   2.6.15-26-amd64
OpenVZ           5.1                      2.6.22-ovz005      (same)
KQEMU            1.3.0~pre11-6            2.6.22.14-kqemu    2.6.15-26-amd64
VirtualBox       1.5.4_OSE / 1.5.51_OSE   2.6.22-14-server   2.6.22.14

Sysbench at scale: total throughput (transactions/sec)

n    Linux-VServer   Xen      KVM      OpenVZ   VirtualBox   KQEMU
1    357.65          320.58   190.32   179.07   101.74       34.53
2    438.93          535.87   248.22   262.20   170.36       59.32
4    589.91          552.13   369.50   299.75   206.88       61.70
8    445.54          551.96   394.29   253.74   221.63       59.90
16   316.85          534.46   382.75   246.04   444.28       55.79
32   75.93           489.39   247.53   222.45   n/a          48.04

Memory allocation per VM

n    Memory (MB)
1    2039*
2    1622
4    811
8    405
16   202
32   101

Sysbench at scale: average throughput per VM (transactions/sec)

n    Linux-VServer   Xen      KVM      OpenVZ   VirtualBox   KQEMU
1    357.65          320.58   190.32   179.07   101.74       34.53
2    219.46          267.93   124.11   131.10   85.18        29.66
4    147.48          138.03   92.37    74.94    51.72        15.42
8    55.69           68.99    49.29    31.72    27.70        7.49
16   19.80           33.40    23.92    15.38    27.77        3.49
32   2.37            15.29    7.74     6.95     19.25        1.50

Kernel compilation (VMs average): time (seconds)

n         1        2        4        8         16        32
Host      318.91   352.45   683.03   1373.31   2757.46   5225.24
VServer   317.71   364.16   691.65   1359.80   2694.99   5326.72
Xen       336.61   360.39   728.51   1492.95   3028.92   6737.14

Bzip2: normalized time (vanilla kernel = 1)

Xen                          0.92
Linux-VServer                0.89
VirtualBox (-hwvirtex on)    0.86
VirtualBox (-hwvirtex off)   0.85
KVM                          0.85
KQEMU                        0.55
OpenVZ                       0.46
QEMU                         0.08

Dbench: normalized throughput (vanilla kernel = 1)

Linux-VServer   0.98
Xen             0.27
OpenVZ          0.14
KVM             0.12
KQEMU           0.02
QEMU            0.01

dd (copy of ISO file): normalized throughput (vanilla kernel = 1)

Linux-VServer   1.286
Xen             0.895
KVM             0.859
OpenVZ          0.169

dd (60G /dev/zero --> /dev/null): normalized throughput (vanilla kernel = 1)

Linux-VServer   0.924
KVM             0.873
OpenVZ          0.518
Xen             0.277

Netperf (TCP Stream Test): normalized throughput (vanilla kernel = 1)

VirtualBox (-hwvirtex off)   1.072
VirtualBox (-hwvirtex on)    1.028
Linux-VServer                1.012
Xen                          0.995
OpenVZ                       0.983
QEMU (-no-kqemu)             0.244
KQEMU                        0.221
KQEMU (-kernel-kqemu)        0.169
KVM                          0.112

Rsync (kernel tree): normalized time (vanilla kernel = 1)

OpenVZ                       0.981
VirtualBox (-hwvirtex on)    0.865
VirtualBox (-hwvirtex off)   0.863
Linux-VServer                0.846
Xen                          0.838
QEMU                         0.791
KQEMU (-kernel-kqemu)        0.791
KQEMU                        0.782
KVM                          0.712

Rsync (ISO file): normalized time (vanilla kernel = 1)

OpenVZ                       1.009
VirtualBox (-hwvirtex on)    0.883
KQEMU                        0.874
VirtualBox (-hwvirtex off)   0.871
Linux-VServer                0.855
KQEMU (-kernel-kqemu)        0.853
Xen                          0.841
QEMU                         0.742
KVM                          0.449

Sysbench: normalized throughput (vanilla kernel = 1)

Linux-VServer                0.938
Xen                          0.836
KVM                          0.502
OpenVZ                       0.468
VirtualBox (-hwvirtex on)    0.273
VirtualBox (-hwvirtex off)   0.246
KQEMU                        0.085
KQEMU (-kernel-kqemu)        0.071
QEMU                         0.047

Fernando Laudares Camargos
Gabriel Girard
Benoît des Ligneris, Ph.D.
[email protected]

Comparative study of Open Source virtualization & contextualization technologies

Context (1)

Introduction

Why virtualize the server infrastructure

Virtualization technologies

The experiments

Explanations & anomalies

Which technology is best for... you?

Context (2)

Research executed by Fernando L. Camargos in pursuit of his Master's degree in Computer Science, under the direction of Gabriel Girard (Université de Sherbrooke) and Benoît des Ligneris (Révolution Linux)

This being a research work, some questions remain open... maybe you can help!

Why virtualize the server infrastructure (1)?

Server consolidation is the most frequently mentioned argument

Why virtualize the server infrastructure (2)?

Reduction of the purchase and maintenance costs

Compatibility with legacy applications and OSs

Security: an environment to execute untrusted applications

Low cost environment for software development

Centralized control/management

Easy backup/restore procedures

Live migration

Quick server fail-over

High availability

Virtual appliances

Controlled sharing of resources

Cloud computing

Hardware abstraction

It's... cool!

However, the advantages of virtualization do not always translate into monetary savings; there are other arguments in favor of virtualization:

Full virtualization

Para-virtualization

OS-level virtualization (contextualization)

Hardware emulation

Binary translation

Classic virtualization

Virtualization technologies (1)

You could also mention Solaris Zones (and Containers).

Full virtualization

Para-virtualization

OS-level virtualization (contextualization/containers)

Hardware emulation

Binary translation

Classic virtualization

Xen

Linux-VServer

OpenVZ

KVM

VirtualBox

KQEMU

Virtualization technologies (2)

virtualization != emulation

QEMU is an emulator

Virtualization technologies (3)

Virtualization technologies (4)

Virtualization technologies: partial emulation vs. no emulation

Partial emulation: KQEMU, KVM, VirtualBox
No emulation: OpenVZ, Xen (Linux), Linux-VServer

2 types of hypervisors:

Type I hypervisors: KVM, Xen

Type II hypervisors: VirtualBox, KQEMU

Virtualization technologies (5)

The experiments (1)

A virtualization layer implies overhead.

But by how much?

To find out, we need to measure the efficiency of the virtualization technologies

efficiency = performance + scalability

where:

Performance (overhead): one virtual machine only

Scalability: several virtual machines

2 types of experiments:

The experiments (2)

Virtualization solutions evaluated in this study

Chosen OSs:

Host: Ubuntu 7.10

VMs: Ubuntu 6.06

Test bed:

Intel Core 2 Duo 6300, 1.86GHz (x86_64 / VT-x)

4G Memory

Hard drive SATA 80G

The experiments (3)

64-bit kernel for all technologies

Use of the VT extensions for KVM and Xen

32-bit VM for VirtualBox

Identical memory allocation per VM for every technology but VServer: 2039 MB

Bits & Bytes & VMs:

The experiments (4)

7 benchmarks (different workloads)

Reference: executed on the Linux host (scale = 1)

executed inside the virtual machines

4 execution sets

results = the average of the last 3 sets, normalized by the result obtained on the Linux host

Methodology:
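A minimal sketch of this normalization step, assuming the four measurements land in a file such as runs.txt, one result per line (the file name and the host value are illustrative, not the authors' actual tooling):

# average the last 3 of 4 runs and normalize by the host result
# (for time-based metrics the ratio is inverted so that 1.0 = native speed)
host=376.70   # hypothetical host reference result
tail -n 3 runs.txt | awk -v h="$host" '{ s += $1 } END { printf "%.3f\n", (s / 3) / h }'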

The experiments - Performance (1)

A balanced workload: a little bit of everything, without stressing any one resource too much

Metric: time to complete the compilation

Compilation of the Linux kernel

tar xvzf linux-XXX.tar.gz
cd linux-XXX
make defconfig   # ("New config with default answer to all options")
---
date +%s.%N && make && date +%s.%N
...
make clean
date +%s.%N && make && date +%s.%N
...

3x
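The two date +%s.%N stamps bracket the timed build; a minimal sketch of deriving the elapsed time from them (our illustration, not necessarily the authors' script):

t0=$(date +%s.%N)
make > build.log 2>&1
t1=$(date +%s.%N)
echo "$t0 $t1" | awk '{ printf "compile time: %.2f s\n", $2 - $1 }'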

The experiments - Performance (2)

The experiments - Performance (3)

Software for file compression

Using the option that yields maximal compression (-9), which considerably increases the memory utilization per process

Metric: time to complete the compression

Bzip2

cd /var/tmp
cp /home/fernando/Iso/ubuntu-6.06.1-server-i386.iso .
date +%s.%N && bzip2 -9 ubuntu-6.06.1-server-i386.iso && date +%s.%N
rm ubuntu-6.06.1-server-i386.iso.bz2
...

4x

The experiments - Performance (4)

The experiments - Performance (5)

Derived from the Netbench benchmark

Emulates the load imposed on a file server by n Windows 95 clients

n(umber of clients)=100, t(ime)=300

Metric: throughput (MB/sec)

Dbench

/usr/local/bin/dbench -t 300 -D /var/tmp 100 # 4x

The experiments - Performance (6)

* no results for VirtualBox

The experiments - Performance (7)

Application for low-level (bit-by-bit) data copy

Measures the performance of the I/O system (hard drive access)

2 tests:

copy of a single big file

copy of 60G of /dev/zero to /dev/null

Metric: throughput

dd

...

...
dd if=/opt/iso/ubuntu-6.06.1-server-i386.iso of=/var/tmp/out.iso
...
dd if=/dev/zero of=/dev/null count=117187560   # 117187560 = 60G
...
rm -fr /var/tmp/*   # between execution sets
...
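For reference, the count matches the stated 60G under dd's default block size of 512 bytes (an assumption, since no bs= is given):

echo $((117187560 * 512))   # 60000030720 bytes, i.e. ~60 GB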

The experiments - Performance (8)

* no results for KQEMU or VirtualBox

The experiments - Performance (9)

* no results for KQEMU or VirtualBox

The experiments - Performance (10)

A benchmark that can be used to measure several aspects of the network performance

TCP Stream test: measures the speed of TCP data exchange over the network (10 sec.)

Metric: throughput (bits/sec)

Netperf

netserver    # on the server
...
netperf -H   # on the client, 4x
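The server address after -H is elided above; a hypothetical complete invocation, with the test length made explicit, would look like this (192.168.0.10 is a placeholder address):

netperf -H 192.168.0.10 -l 10   # TCP stream test is netperf's default; -l 10 = 10-second run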

The experiments - Performance (11)

The experiments - Performance (12)

Similar to Netperf's TCP Stream Test; measures the performance of file transfers over the network

2 tests:

ISO file: 1 big file (433M)

Linux kernel tree: several small files (294M)

Metric: time (sec.)

Rsync

...
date +%s.%N && rsync -av ::kernel /var/tmp && date +%s.%N
...
date +%s.%N && rsync -av ::iso /var/tmp && date +%s.%N
...
rm -fr /var/tmp/*   # between execution sets
...
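The ::kernel and ::iso targets imply an rsync daemon on the remote host exporting two modules; a minimal sketch of such a configuration (the paths are assumptions):

# hypothetical /etc/rsyncd.conf on the serving host
[kernel]
    path = /srv/kernel-tree   # the kernel tree: many small files (294M)
    read only = yes
[iso]
    path = /srv/iso           # the single big ISO file (433M)
    read only = yes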

The experiments - Performance (13)

The experiments - Performance (14)

The experiments - Performance (15)

Measures the performance of a DB server

Workload centered on I/O operations in the file system

Metric: throughput (transactions/sec)

Sysbench

sysbench --test=oltp --mysql-user=root --mysql-host=localhost --debug=off prepare   # (1x)
sysbench --test=oltp --mysql-user=root --mysql-host=localhost --debug=off run       # (4x)
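Note that sysbench's oltp test operates on the sbtest database by default, so it must exist before prepare is run; a likely setup step (an assumption, not shown in the original) is:

mysqladmin -u root create sbtest   # sysbench --test=oltp defaults to the sbtest database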

On-Line Transaction Processing

OLTP test statistics:
    queries performed:
        read:                 140000
        write:                50000
        other:                20000
        total:                210000
    transactions:             10000  (376.70 per sec.)
    deadlocks:                0      (0.00 per sec.)
    read/write requests:      190000 (7157.28 per sec.)
    other operations:         20000  (753.40 per sec.)
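As a quick consistency check, dividing the counts by their per-second rates recovers the same run length from two independent figures:

echo "10000 / 376.70" | bc -l     # ~26.55 s, from the transaction rate
echo "190000 / 7157.28" | bc -l   # ~26.55 s, from the read/write request rate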

The experiments - Performance (16)

The experiments - Performance (17)

Conclusion:

Linux-VServer: excellent performance. It presented minimal to no overhead compared to native Linux.

Xen: showed great performance in all but the Dbench benchmark (an I/O-bound benchmark).

KVM's performance was fairly good for a full virtualization solution, but it should be avoided for applications that rely heavily on I/O.

The experiments - Performance (18)

Conclusion (cont.):

OpenVZ showed highly variable performance (from weak to excellent), most likely because of its I/O accounting, and because of network optimizations that benefit the network-related tests.

VirtualBox presented good performance on the file compression and network-based benchmarks, and poor performance in all other situations.

KQEMU showed poor performance on all benchmarks. It is clear that this virtualization solution does not make good use of the available resources, and its use on production servers should be avoided.

The experiments - Performance (19)

1 benchmark (Sysbench or kernel compilation) executed by n VMs concurrently

n = 1, 2, 4, 8, 16 and 32

4 execution sets:

results = average of the last three execution sets

Memory allocation per VM:

Methodology:

* 1536 MB (KQEMU)

The experiments - Scalability (1)

The experiments - Scalability (2)

The experiments - Scalability (3)

The experiments - Scalability (4)

The experiments - Scalability (5)

Conclusion:

The efficiency of virtualization solutions is strongly related to the number of VMs executing concurrently (scalability)

Most of the time, one additional VM helps to get the maximum performance out of a given server (linked to the number of CPUs)

More VMs decrease performance, as a bottleneck starts limiting it (the number of CPUs/cores matters!)

Linux-VServer has shown the best overall performance for up to 5-7 VMs.

Xen has proved to be the best full virtualization based solution.

Conclusion - Scalability (1)

KVM has shown a reasonable performance for a full virtualization solution.

OpenVZ's performance was not what we would expect of a contextualization solution. Our hypothesis is that the accounting (beancounters) is the root cause of the overhead.

VirtualBox showed impressive performance: the total throughput more than doubled when the number of VMs went from 8 to 16. However, we were unable to execute this experiment with 32 VirtualBox VMs running concurrently.

KQEMU's performance was weak compared to all other solutions, independently of the number of VMs in execution.

Conclusion - Scalability (2)

KQEMU adds a virtualization layer on top of the QEMU emulator, so that only part of the instructions are interpreted instead of all of them. Yet the results indicate only small performance gains.

Explain OpenVZ's weak performance

Where does KVM's weak network performance come from? Don't forget that the KVM VM image is the same as QEMU's (and KQEMU's); what changes is the VMM.

Which technology to use in each case?
A technological point of view...

OpenVZ: purely network-related applications, thanks to the optimizations done in its network layer. Not recommended for I/O-heavy applications.

Linux-VServer: all kinds of situations (a priori).

Xen can also be used in all kinds of situations, but requires significant modifications to the guest OS kernels OR the use of the VT virtualization extensions.

KVM and VirtualBox have proven to be good options for development environments.

KQEMU has shown weak performance. It is suitable for development only.

Results differ greatly across benchmarks and technologies, so benchmark the technology you plan to use with your own mission-critical application BEFORE virtualizing your servers (e.g. a file-server benchmark).

Only Xen is actually supported by the industry (RedHat, SuSE, Mandrake, IBM, etc.)

KVM is available in the standard Linux kernel: Yeah! But poor performance overall ;-(

Linux-VServer and OpenVZ: both need a modified kernel that is not officially supported by the aforementioned giants of the industry, but . . .

Conclusion: Which technology to use in each case?

Since the last OLS, key players like IBM, Intel, and Google have been working hard to include a container-based technology in the Linux kernel

Many patches from OpenVZ have recently been integrated into the kernel, and everyone expects that Really Soon Now we will have contextualization in the Linux kernel without the need for any kernel hacking

We strongly believe that the integration of a contextualization/container solution is the best way to go for Linux-on-Linux virtualization needs

It will offer VMware very strong and completely open-source competition

Future / contextualization
