
DISTRIBUTED SYSTEMS RESEARCH GROUP
http://nenya.ms.mff.cuni.cz

CHARLES UNIVERSITY, PRAGUE
Faculty of Mathematics and Physics

Generic Environment for Full Automation of Benchmarking

Tomáš Kalibera, Lubomír Bulej, Petr Tůma

Tomáš Kalibera, SOQUA 2004, Erfurt, Germany

History: Middleware Benchmarking Projects

• Vendor testing: Borland, IONA, MLC Systeme

• Open source testing: omniORB, TAO, OpenORB, …

• Open CORBA Benchmarking: anyone can upload their own results

Motivation: Regression Testing for Performance

• Correctness tests: commonly used, detect bugs

• Regression testing: integrated into the development environment, tests performed regularly

• Performance tests: still in the research stage, detect performance regressions

Regression Benchmarking

• Detection of performance regressions: benchmarking performance of consecutive versions of software, automatic comparison of results

• Issues:
  • automatic comparison of results (a comparison sketch follows this list)
    • fluctuations in results
    • results format, different level of detail
  • automatic running of benchmarks
    • monitoring, failure resolution
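To make the comparison step concrete, here is a minimal sketch of flagging a regression between two versions from repeated timing samples. The class name, the plain mean-plus-threshold rule, and the sample values are illustrative assumptions, not the environment's actual analysis, which has to cope with the fluctuations mentioned above.

```java
// Minimal sketch of automatic result comparison between two software versions.
// The threshold-based check and the sample data are assumptions for illustration.
import java.util.Arrays;

public class RegressionCheck {
    // Mean of a set of measured response times (e.g., in microseconds).
    static double mean(double[] samples) {
        return Arrays.stream(samples).average().orElse(Double.NaN);
    }

    // Flags a regression when the new version is slower than the old one
    // by more than the given relative tolerance.
    static boolean isRegression(double[] oldRun, double[] newRun, double tolerance) {
        return mean(newRun) > mean(oldRun) * (1.0 + tolerance);
    }

    public static void main(String[] args) {
        double[] v1 = {102.0, 99.5, 101.2, 100.8};   // results for version N
        double[] v2 = {109.3, 111.0, 108.7, 110.4};  // results for version N+1
        System.out.println("Regression: " + isRegression(v1, v2, 0.05));
    }
}
```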

Steps of Running a Benchmark

Workflow: download benchmark → build benchmark; download application → build application; deploy → execute (with monitoring) → collect results → result repository
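A minimal sketch of this workflow as a sequence of steps; the Step interface and the plain sequential runner are assumptions for illustration, not the environment's API.

```java
// Minimal sketch of the benchmark workflow above as an ordered sequence of steps.
public class BenchmarkPipeline {
    interface Step { void run() throws Exception; }

    public static void main(String[] args) throws Exception {
        Step[] steps = {
            () -> System.out.println("download benchmark sources"),
            () -> System.out.println("build benchmark"),
            () -> System.out.println("download application sources"),
            () -> System.out.println("build application"),
            () -> System.out.println("deploy"),
            () -> System.out.println("execute (under monitoring)"),
            () -> System.out.println("collect results"),
            () -> System.out.println("store results in the result repository")
        };
        // Run the steps in order; a real runner would stop and report on failure.
        for (Step step : steps) {
            step.run();
        }
    }
}
```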

Generic Benchmarking Environment

• Automated processing: monitoring, handling of failures, management tools

• Common form of results: allows benchmark-independent analysis; raw data plus system configuration (a sketch of such a record follows this list)

• Flexibility: benchmark and analysis independence
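As an illustration of the common form of results, the following is a hypothetical shape of one result entry: raw samples plus the configuration of the system they were measured on. The class and field names are assumptions, not the environment's actual format.

```java
// Hypothetical benchmark-independent result entry: raw data plus system configuration.
import java.util.List;
import java.util.Map;

public class ResultEntry {
    final String benchmark;                  // e.g. "Xampler" or "RUBiS"
    final String versionUnderTest;           // version of the software being benchmarked
    final Map<String, String> systemConfig;  // CPU, OS, middleware settings, ...
    final List<Double> rawSamples;           // unaggregated raw measurements

    ResultEntry(String benchmark, String versionUnderTest,
                Map<String, String> systemConfig, List<Double> rawSamples) {
        this.benchmark = benchmark;
        this.versionUnderTest = versionUnderTest;
        this.systemConfig = systemConfig;
        this.rawSamples = rawSamples;
    }

    public static void main(String[] args) {
        ResultEntry entry = new ResultEntry("RUBiS", "1.0",
                Map.of("cpu", "x86", "os", "Linux"),
                List.of(101.2, 99.8, 100.5));
        System.out.println(entry.benchmark + " samples: " + entry.rawSamples.size());
    }
}
```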

Automatic Downloading and Building

• Download methods: cvs checkout, http, ftp

• Build methods: Ant, make, scripts; support for different platforms (a fetch-and-build sketch follows this list)

• Software repository: storage for sources and binaries, annotated for future reference
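A sketch of the download and build steps driven through external tools, using cvs and Ant as named above; the class, the working directory, and the repository location are made up for illustration.

```java
// Illustrative fetch-and-build: check out sources with cvs and build them with Ant.
// The repository location, module name, and working directory are hypothetical.
import java.io.File;
import java.io.IOException;

public class FetchAndBuild {
    // Runs an external command in the given working directory and fails loudly.
    static void run(File dir, String... command) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(command).directory(dir).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IOException("command failed: " + String.join(" ", command));
        }
    }

    public static void main(String[] args) throws Exception {
        File workDir = new File("/tmp/been-work");   // hypothetical working directory
        workDir.mkdirs();
        // Download step: cvs checkout (http or ftp downloads would be handled analogously).
        run(workDir, "cvs", "-d", ":pserver:anonymous@cvs.example.org:/cvs", "checkout", "benchmark");
        // Build step: Ant here; make or custom scripts could be selected per platform.
        run(new File(workDir, "benchmark"), "ant", "dist");
    }
}
```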


Automatic Deployment

• Reproducibility

• Platform dependencies: CPU type, operating system

• Resource requirements: CPU frequency, RAM

• Software requirements: database server, web server (a host-matching sketch follows this list)
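A minimal sketch of matching a benchmark's deployment requirements against a candidate host; the requirement categories come from the slide, while the classes, field names, and values are assumptions.

```java
// Illustrative check of whether a host satisfies a benchmark's deployment requirements.
public class DeploymentCheck {
    // Description of a candidate host (example values).
    static class Host {
        String cpuType = "x86";
        String os = "Linux";
        int cpuMhz = 2400;
        int ramMb = 2048;
        boolean hasDatabaseServer = true;
        boolean hasWebServer = true;
    }

    // Requirements of the benchmark to be deployed (example values).
    static class Requirements {
        String cpuType = "x86";
        String os = "Linux";
        int minCpuMhz = 1000;
        int minRamMb = 1024;
        boolean needsDatabaseServer = true;
        boolean needsWebServer = false;
    }

    static boolean satisfies(Host h, Requirements r) {
        return h.cpuType.equals(r.cpuType)
            && h.os.equals(r.os)
            && h.cpuMhz >= r.minCpuMhz
            && h.ramMb >= r.minRamMb
            && (!r.needsDatabaseServer || h.hasDatabaseServer)
            && (!r.needsWebServer || h.hasWebServer);
    }

    public static void main(String[] args) {
        System.out.println("deployable: " + satisfies(new Host(), new Requirements()));
    }
}
```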


Automatic Execution

• Multiple applications: run in correct order, wait for initialization

• Monitoring: detect crashes, detect deadlocks, but do not distort the results! (a sketch follows this list)
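A sketch of ordered startup with low-overhead liveness monitoring, assuming two hypothetical processes and made-up wait and polling intervals; a real monitor would also detect deadlocks, which this sketch omits.

```java
// Illustrative ordered execution with crash monitoring. The jar names, the
// fixed initialization wait, and the polling interval are assumptions.
public class MonitoredExecution {
    public static void main(String[] args) throws Exception {
        // Start the server first and give it time to initialize before the client runs.
        Process server = new ProcessBuilder("java", "-jar", "server.jar").inheritIO().start();
        Thread.sleep(5_000);   // crude wait-for-initialization placeholder

        Process client = new ProcessBuilder("java", "-jar", "client.jar").inheritIO().start();

        // Poll rarely so that monitoring itself does not distort the measurements.
        while (client.isAlive()) {
            if (!server.isAlive()) {
                System.err.println("server crashed, aborting the benchmark run");
                client.destroy();
                break;
            }
            Thread.sleep(10_000);
        }
    }
}
```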


Architecture of Benchmarking Environment

• Task processing system: deployment, execution, and monitoring of tasks; task scheduler with dependencies on other tasks' checkpoints; jobs and services (a scheduling sketch follows this list)

• Environment tasks: result repository, software repository

• Benchmarking tasks: benchmarks, compilations, required applications
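A minimal sketch of checkpoint-based scheduling, assuming hypothetical Task and checkpoint names: a task runs only after the checkpoints it waits for have been reached and then reports its own. The real task processing system also deploys and monitors tasks and distinguishes long-running services, which this sketch omits.

```java
// Minimal checkpoint-based scheduler sketch; class, method, and task names are
// illustrative assumptions, not the environment's API.
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class TaskScheduler {
    // A job waits for checkpoints such as "build:done" and reaches one of its own.
    static class Task {
        final String name;
        final List<String> waitsFor;
        final String reaches;
        Task(String name, List<String> waitsFor, String reaches) {
            this.name = name;
            this.waitsFor = waitsFor;
            this.reaches = reaches;
        }
    }

    public static void main(String[] args) {
        List<Task> tasks = List.of(
            new Task("download sources", List.of(), "download:done"),
            new Task("build", List.of("download:done"), "build:done"),
            new Task("run benchmark", List.of("build:done"), "benchmark:done")
        );
        Set<String> reached = new HashSet<>();
        boolean progress = true;
        while (progress) {
            progress = false;
            for (Task t : tasks) {
                // Run a task once all checkpoints it waits for have been reached.
                if (!reached.contains(t.reaches) && reached.containsAll(t.waitsFor)) {
                    System.out.println("running task: " + t.name);
                    reached.add(t.reaches);
                    progress = true;
                }
            }
        }
    }
}
```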

Example: RUBiS Benchmark

Task graph for the RUBiS benchmark, spanning three hosts that each run the task processing system:

• CONTROL HOST: Result Repository task (service, up); Resource Management task (service, up)

• SERVER HOST: MySQL Server process with Database task (service, up); Jonas EJB Server process with EJB Server task (service, up), Deploy Beans task (job, running), Fill Database task (job, done), Compile Beans task (job, done)

• CLIENT HOST: Client Emulator process with Client Emulator task (job, prepared)

The tasks are connected by wait_for_up and wait_for_done dependencies on each other's checkpoints.
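How the graph above might be written down as data, with each task listing the checkpoints it waits for; the specific edges are an illustrative reading of the diagram, not a verified configuration of the environment.

```java
// Illustrative declaration of the RUBiS task dependencies as data.
// The edge set is an assumption read off the diagram above.
import java.util.List;
import java.util.Map;

public class RubisTaskGraph {
    public static void main(String[] args) {
        Map<String, List<String>> waitFor = Map.of(
            "Fill Database",   List.of("Database:up"),
            "Compile Beans",   List.of(),
            "Deploy Beans",    List.of("EJB Server:up", "Compile Beans:done"),
            "Client Emulator", List.of("Deploy Beans:done", "Fill Database:done",
                                       "Result Repository:up")
        );
        waitFor.forEach((task, deps) ->
            System.out.println(task + " waits for " + deps));
    }
}
```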

Conclusion & Future Work

• Generic benchmarking environment: automatic running of (existing) benchmarks; common form of results, result repository

• Current status: early implementation phase

• Future work: support for the Xampler and RUBiS benchmarks, automatic detection of regressions, regression benchmarking of CORBA and EJB

Publications

• Bulej, L., Kalibera, T., Tůma, P.: Repeated Results Analysis for Middleware Regression Benchmarking, accepted for publication in the Special Issue on Performance Modeling and Evaluation of High-Performance Parallel and Distributed Systems, Performance Evaluation: An International Journal, Elsevier

• Bulej, L., Kalibera, T., Tůma, P.: Regression Benchmarking with Simple Middleware Benchmarks, in proceedings of IPCCC 2004, International Workshop on Middleware Performance, Phoenix, AZ, USA

• Buble, A., Bulej, L., Tůma, P.: CORBA Benchmarking: A Course With Hidden Obstacles, in proceedings of the IPDPS Workshop on Performance Modeling, Evaluation and Optimization of Parallel and Distributed Systems (PMEO-PDS 2003), Nice, France

• Tůma, P., Buble, A.: Open CORBA Benchmarking, in proceedings of the 2001 International Symposium on Performance Evaluation of Computer and Telecommunication Systems (SPECTS 2001), published by SCS, Orlando, FL, USA