Transcript of "Measuring PROOF Lite performance in (non)virtualized environment", Ioannis Charalampidis, Aristotle University of Thessaloniki, Summer Student 2010 (39 slides).

Page 1:

Measuring PROOF Lite performance in (non)virtualized environment

Ioannis Charalampidis, Aristotle University of Thessaloniki, Summer Student 2010

Page 2:

Overview

• Introduction
• Benchmarks: Overall execution time
• Benchmarks: In-depth analysis
• Conclusion

Page 3:

What am I looking for?

• There is a known overhead caused by the virtualization process
▫ How big is it?
▫ Where is it located?
▫ How can we minimize it?
▫ Which hypervisor has the best performance?
• I am using CernVM as the guest

Page 4:

What is CernVM?

• It’s a baseline Virtual Software Appliance for use by LHC experiments

• It’s available for many hypervisors

Page 5:

How am I going to find the answers?

• Using a standard data analysis application (ROOT + PROOF Lite) as the benchmark
• Test it on different hypervisors
• And on a varying number of workers/CPUs
• Compare the performance (physical vs. virtualized)

Page 6:

Problem

• The benchmark application requires too much time to complete (2 min to 15 min)
▫ At least 3 runs are required for reliable results
▫ The in-depth analysis overhead is about 40%
▫ It is not efficient to perform detailed analysis for every CPU / hypervisor configuration

Approach:
▫ Create the overall execution time benchmarks
▫ Find the best configuration to run the traces on

Page 7:

Benchmarks performed

• Overall time
▫ Using the time utility and automated batch scripts
• In-depth analysis
▫ Tracing system calls using STrace and SystemTAP
▫ Analyzing the trace files using applications I wrote:
  BASST (Batch Analyzer based on STrace)
  KARBON (general-purpose application profiler based on trace files)

Page 8:

Process description and results

Page 9:

Benchmark Configuration

• Base machine
▫ Scientific Linux CERN 5
• Guests
▫ CernVM 2.1
• Software packages from SLC repositories
▫ Linux kernel 2.6.18-194.8.1.el5
▫ XEN 3.1.2 + 2.6.18-194.8.1.el5
▫ KVM 83-194.8.1.el5
▫ Python 2.5.4p2 (from AFS)
▫ ROOT 5.26.00b (from AFS)
• Base machine hardware
▫ 24 x Intel Xeon X7460 2.66 GHz with VT-x support (64-bit)
▫ No VT-d nor Extended Page Tables (EPT) hardware support
▫ 32 GB RAM

Page 10:

Benchmark Configuration

• Virtual machine configuration
▫ 1 CPU, then 2 to 16 CPUs in steps of 2
▫ <CPU#> + 1 GB RAM for physical disk and network tests
▫ <CPU#> + 17 GB RAM for RAM disk tests
▫ Disk image for the OS
▫ Physical disk for the data + software
• Important background services running
▫ NSCD (name service caching daemon)

Page 11:

Benchmark Configuration

• Caches were cleared before every test
▫ Page cache, dentries and inodes
▫ Using the /proc/sys/vm/drop_caches flag
• No swap memory was used
▫ Verified by periodically monitoring the free memory
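The cache-clearing step can be sketched as a small helper; this is an assumed minimal version, not the actual batch script, and the `path` parameter exists only so the sketch can be exercised outside a root shell (the real target is always /proc/sys/vm/drop_caches):

```python
def drop_caches(path="/proc/sys/vm/drop_caches"):
    """Drop the page cache, dentries and inodes (requires root on a real run).

    The standard Linux interface: writing "1" frees the page cache,
    "2" frees dentries and inodes, "3" frees both.
    """
    with open(path, "w") as f:
        f.write("3\n")
```

On real runs this write is performed on both the host and the guest before each job is started.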

Page 12:

Automated batch scripts

• The VM batch script runs on the host machine
• It repeats the following procedure:
▫ Create a new virtual machine
▫ Wait for the machine to finish booting
▫ Connect to the controlling script inside the VM
▫ Drop caches both on the host and the guest
▫ Start the job
▫ Receive and archive the results

[Diagram: a client script on the host drives the benchmarks through the hypervisor, talking to a server script inside the guest]
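The batch loop can be sketched as follows; every helper name (`create_vm`, `wait_for_boot`, and so on) is a hypothetical stand-in for the real host-side commands, passed in as callables so the orchestration logic stands alone:

```python
def run_batch(vm_configs, ops):
    """Repeat the benchmark procedure for every VM configuration.

    `ops` maps step names to callables; all names are illustrative
    placeholders, not the actual tools used in the talk.
    """
    results = []
    for cfg in vm_configs:
        vm = ops["create_vm"](cfg)              # create a new virtual machine
        ops["wait_for_boot"](vm)                # wait for the machine to finish booting
        ctl = ops["connect"](vm)                # connect to the controlling script inside
        ops["drop_caches"](vm)                  # drop caches on both host and guest
        results.append(ops["start_job"](ctl))   # start the job, receive the results
    return results
```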

Page 13:

Problem

• There was a bug in PROOF Lite: it looked up a non-existent hostname during the startup of each worker
  Example: 0.2-plitehp24.cern.ch-1281241251-1271
• Discovered by detailed system call tracing
▫ The hostname couldn't be cached
▫ The application had to wait for the timeout
▫ The startup time was delayed randomly
▫ Call tracing applications made this delay even bigger, virtually hanging the application

Page 14:

Problem

• The problem was resolved as follows:
▫ A minimal DNS proxy was developed that fakes the existence of the buggy hostname
▫ It was later fixed in the PROOF source

[Diagram: the application queries the fake DNS proxy; real names (e.g. cernvm.cern.ch → 137.138.234.20) are forwarded to the DNS server, while the buggy x.x-xxxxxx-xxx-xxx names are answered locally with 127.0.0.1]
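The proxy's decision logic can be illustrated like this; it is a sketch only, assuming the buggy names follow the shape of the example above, while the real proxy of course speaks the DNS wire protocol rather than taking a callback:

```python
import re

# Hostnames shaped like "0.2-plitehp24.cern.ch-1281241251-1271" get a fake
# local answer; everything else is forwarded to the real resolver.
BUGGY_NAME = re.compile(r"^\d+\.\d+-.+-\d+-\d+$")

def resolve(hostname, forward):
    """Answer buggy worker hostnames immediately, forward the rest."""
    if BUGGY_NAME.match(hostname):
        return "127.0.0.1"      # fake answer, so no DNS timeout is hit
    return forward(hostname)    # normal lookup via the real DNS server
```

Answering locally is what removes the random startup delay: the worker no longer waits for a resolver timeout.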

Page 15:

Problem

• Example: events/sec for different CPU settings, as reported by the buggy benchmark

[Charts: before vs. after the fix]

Page 16:

Results – Physical Disk

Page 17:

Results – Network (XROOTD)

Page 18:

Results – RAM Disk

Page 19:

Results – Relative values

[Charts: relative performance for RAM disk, network (XROOTD) and physical disk; series: bare metal, KVM, XEN]

Page 20:

Results – Absolute values

[Charts: absolute performance for RAM disk, network (XROOTD) and physical disk; series: bare metal, KVM, XEN]

Page 21:

Results – Comparison chart

Page 22:

Procedure, problems and results

Page 23:

In-depth analysis

• In order to get more details, the program execution was monitored and all system calls were traced and logged
• Afterwards, the analyzer extracted useful information from the trace files, such as:
▫ The time spent on each system call
▫ The filesystem / network activity
• The process of tracing adds some overhead, but it cancels out of the overall performance measurement

Page 24:

System call tracing utilities

• STrace
▫ Traces application-wide system calls from user space
▫ Attaches to the traced process using the ptrace() system call and monitors its activity
• Advantages
▫ Traces the application's system calls in real time
▫ Has very verbose output
• Disadvantages
▫ Creates a big overhead

[Diagram: STrace sits in user space, intercepting the system calls between the process and the kernel]

Page 25:

System call tracing utilities

• SystemTAP
▫ Traces system-wide kernel activity, asynchronously
▫ Runs as a kernel module
• Advantages
▫ Can trace virtually everything on a running kernel
▫ Supports scriptable kernel probes
• Disadvantages
▫ It is not simple to extract detailed information
▫ System calls can be lost under high CPU activity

[Diagram: SystemTAP runs inside the kernel, observing system calls below the process]

Page 26:

System call tracing utilities

• Sample STrace output:

5266 1282662179.860933 arch_prctl(ARCH_SET_FS, 0x2b5f2bcc27d0) = 0 <0.000005>
5266 1282662179.860960 mprotect(0x34ca54d000, 16384, PROT_READ) = 0 <0.000007>
5266 1282662179.860985 mprotect(0x34ca01b000, 4096, PROT_READ) = 0 <0.000006>
5266 1282662179.861009 munmap(0x2b5f2bc92000, 189020) = 0 <0.000011>
5266 1282662179.861082 open("/usr/lib/locale/locale-archive", O_RDONLY) = 4 <0.000008>
5266 1282662179.861113 fstat(4, {st_mode=S_IFREG|0644, st_size=56442560, ...}) = 0 <0.000005>
5266 1282662179.861166 mmap(NULL, 56442560, PROT_READ, MAP_PRIVATE, 4, 0) = 0x2b5f2bcc3000 <0.000007>
5266 1282662179.861192 close(4) = 0 <0.000005>
5266 1282662179.861269 brk(0) = 0x1ad1f000 <0.000005>
5266 1282662179.861290 brk(0x1ad40000) = 0x1ad40000 <0.000006>
5266 1282662179.861444 open("/usr/share/locale/locale.alias", O_RDONLY) = 4 <0.000009>
5266 1282662179.861483 fstat(4, {st_mode=S_IFREG|0644, st_size=2528, ...}) = 0 <0.000005>
5266 1282662179.861944 read(4, "", 4096) = 0 <0.000006>
5266 1282662179.861968 close(4) = 0 <0.000005>
5266 1282662179.861989 munmap(0x2b5f2f297000, 4096) = 0 <0.000009>
5264 1282662179.863063 wait4(-1, 0x7fff8d813064, WNOHANG, NULL) = -1 ECHILD (No child processes)
...
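As an illustration of what the analysis stage does with output like the above, here is a minimal sketch (not BASST or KARBON themselves) that sums the time spent per system call, assuming the pid / timestamp / call / &lt;duration&gt; line format shown:

```python
import re

# Matches lines like:
#   5266 1282662179.861192 close(4) = 0 <0.000005>
# Group 1: pid, group 2: timestamp, group 3: syscall name, group 4: duration.
TRACE_LINE = re.compile(r"^(\d+) ([\d.]+) (\w+)\(.*<([\d.]+)>$")

def time_per_syscall(lines):
    """Return total seconds spent in each system call across the trace."""
    totals = {}
    for line in lines:
        m = TRACE_LINE.match(line.strip())
        if m:  # unfinished/resumed and signal lines are simply skipped
            name, dt = m.group(3), float(m.group(4))
            totals[name] = totals.get(name, 0.0) + dt
    return totals
```

Feeding the whole trace file through such a loop yields per-syscall totals like those in the result tables later in the talk.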

Page 27:

KARBON – A trace file analyzer

Page 28:

KARBON – A trace file analyzer

• A general-purpose application profiler based on system call trace files
• Traces file descriptors and reports detailed I/O statistics for files, network sockets and FIFO pipes
• Analyzes the child processes and creates process graphs and process trees
• Can detect the "hot spots" of an application
• Custom analysis tools can be created on demand using the development API

Page 29:

KARBON – Application block diagram

[Diagram: Preprocessing Tool → Analyzer → Filter → Presenters]

Page 30:

Results

•Time utilization of the traced application

Page 31:

Results

•Time utilization of the traced application

Page 32:

Results

•Time utilization of the traced application

Page 33:

Results

• Overall system call time for filesystem I/O
• Reminder: kernel buffers were dropped before every test
▫ Possible caching effect inside the hypervisor

[ms]        | Reading     | Writing    | Seeking     | Total
Bare metal  | 490,861.354 | 2,054.354  | 21,594.583  | 524,872.823
KVM         | 38,391.715  | 36,422.440 | 122,769.518 | 244,406.512
XEN         | 38,111.980  | 20,930.382 | 102,769.901 | 210,247.468

Page 34:

Results

• Overall system call time for UNIX sockets

[ms]        | Receiving  | Sending     | Bind, Listen | Connecting | Total
Bare metal  | 993.884    | 10,313.304  | 4.251        | 5.259      | 11,301.588
KVM         | 59,637.942 | 164,655.077 | 7.412        | 13.656     | 223,872.164
XEN         | 97,823.986 | 550,050.484 | 5.014        | 8.493      | 652,784.010

Page 35:

Results

• Most time-consuming miscellaneous system calls

System call    | Bare metal | KVM        | XEN
wait4()        | 178,200.34 | 316,829.30 | 388,885.57
gettimeofday() | (no trace) | 219,780.33 | 218,018.63
nanosleep()    | (no trace) | 12,250.12  | 12,029.30
time()         | (no trace) | (no trace) | 9,081.94
rt_sigreturn() | 150,943    | 1,685,285  | 9,271,061
setitimer()    | 23,245     | 698,785    | 223,669

Page 36:

Conclusion

• Physical disk
▫ KVM can achieve better performance than XEN, reaching 70 - 98% of the native speed
▫ Best performance achieved on 6 CPUs / 6 workers (7 GB RAM) with 81% of the native speed
• Network (XROOTD)
▫ XEN can achieve better performance than KVM, reaching 73 - 90% of the native speed
▫ Best performance achieved again on 6 CPUs / 6 workers (7 GB RAM) with 92% of the native speed

Page 37:

Conclusion

• Some disk I/O operations (read) appear to be faster inside the virtual machine
• Some of them appear to be slower (seek, write)
▫ Possible caching effect even on direct disk access
• Network I/O
▫ TCP under XEN looks fine, whereas with KVM there are some issues
▫ UNIX sockets seem to carry a significant penalty inside the VMs
• Some miscellaneous system calls take longer inside the VM
▫ Time-related functions (gettimeofday, nanosleep)
▫ Used for the paravirtualized implementation of other system calls?

Page 38:

Other uses of the tools

• SystemTAP could be used by nightly builds in order to detect hung applications

• KARBON can be used as a general-purpose log file analysis program

Page 39:

Future work

• Benchmark VMs with a disk image file residing on a RAID array
• Benchmark many concurrent KVM virtual machines with total memory exceeding the overall system memory – exploit NPT
• Test PCI pass-through for network cards (KVM) – test VT-d
• Convert the benchmark application from Python to pure C
• Repeat the benchmarks with the optimized ROOT input files
• Test the KVM network performance again
• Recompile the kernel with CONFIG_KVM_CLOCK