
SPEC Research℠ Group Newsletter

Volume 1, Issue 1 September 2012


CONTACT

Standard Performance Evaluation Corporation (SPEC)
7001 Heritage Village Plaza, Suite 225
Gainesville, VA 20155, USA

SPEC Research Group
Chair: Samuel Kounev ([email protected])
Web: http://research.spec.org


CONTENTS OF THIS ISSUE

2 SPEC Research Group Officers
2 SPEC Research Working Groups
2 Welcome to the SPEC Research Group Newsletter
2 SPEC Research Group Mission Statement
3 SPEC Announcements
5 New SPEC Research Working Group: Benchmarking architectures for intrusion detection in virtualized environments
5 Face-to-Face Meeting of SPEC Research Group Steering Committee
6 SPEC Distinguished Dissertation Award Winners 2011
8 Call-for-Nominations: SPEC Distinguished Dissertation Award 2012
8 Kieker: A Framework for Application Performance Monitoring and Dynamic Software Analysis
9 Faban: A Platform for Performance Testing of Software
9 Report from the 3rd ACM/SPEC International Conference on Performance Engineering (ICPE 2012)
10 Announcement of ICPE 2013
10 ICPE 2012: Best Paper Awards
11 ICPE 2012: Abstracts of Selected Papers
13 Call for Papers: 4th ACM/SPEC International Conference on Performance Engineering (ICPE 2013)

CALL FOR NOMINATIONS – SPEC DISTINGUISHED DISSERTATION AWARD

The SPEC Distinguished Dissertation Award (formerly called the SPEC Benchmarking Research PhD Award) aims to recognize outstanding doctoral dissertations within the scope of the SPEC Research Group in terms of scientific originality, scientific significance, practical relevance, impact, and quality of the presentation. The main prize – $1000 – will be awarded at ICPE 2013 in Prague.

Read more on page 8

NEW SPEC RESEARCH WORKING GROUP

SPEC Research announces a new working group: “Benchmarking architectures for intrusion detection in virtualized environments”. The goal of the working group is to foster and facilitate innovative research through the exchange of ideas and experiences. In the long term, the working group aims to develop a customizable and flexible IDS benchmarking framework that employs representative cloud workloads, accurate metrics, and a rigorous benchmarking methodology.

Read more on page 5

THE ‘BEST PAPER’ AWARDS FROM THE ICPE 2012 CONFERENCE

At the 2012 International Conference on Performance Engineering (ICPE 2012) in Boston, USA, three outstanding papers were distinguished by the Program and Awards Chairs. The awards were presented in three different categories: Best Innovation, Industry-Related, and Student Paper Award.

Read more on page 10

REPOSITORY OF PEER-REVIEWED TOOLS

A repository of peer-reviewed tools for quantitative system evaluation and analysis is now available at the SPEC RG web site. Among the first tools to pass the review process are Faban and Kieker.

Read more on page 4


SPEC and the names SERT, SPECpower_ssj and SPEC PTDaemon are trademarks of the Standard Performance Evaluation Corporation. Additional product and service names mentioned herein may be the trademarks of their respective owners.

Copyright © 1988–2012 Standard Performance Evaluation Corporation (SPEC). All rights reserved.


SPEC RESEARCH GROUP OFFICERS

Chair: Samuel Kounev, Karlsruhe Institute of Technology (KIT), Germany, [email protected]
Vice-Chair: Kai Sachs, SAP AG, Germany
Secretary: Seetharami R. Seelam, IBM TJ Watson Research Center, USA
Release Manager: Qais Noorshams, KIT, Germany
Steering Committee:
J. Nelson Amaral, University of Alberta, Canada
Rema Hariharan, AMD, USA
Wilhelm Hasselbring, University of Kiel, Germany
Alexandru Iosup, TU Delft, Netherlands
Lizy John, University of Texas at Austin, USA
Samuel Kounev, KIT, Germany
Klaus Lange, HP, USA
Meikel Poess, Oracle Corporation, USA
Kai Sachs, SAP AG, Germany
Seetharami R. Seelam, IBM TJ Watson Research Center, USA
Petr Tůma, Charles University of Prague, Czech Republic
Newsletter Editor: Piotr Rygielski, KIT, Germany

SPEC RESEARCH WORKING GROUPS

Cloud Working Group
Chair: Erich Nahum, IBM TJ Watson Research Center, USA
Vice-Chair: Alexandru Iosup, TU Delft, Netherlands
Secretary: Aleksandar Milenkoski, KIT, Germany
Release Manager: Michal Papež, KIT, Germany
http://research.spec.org/working-groups/rg-cloud-working-group.html

Working Group: Benchmarking architectures for intrusion detection in virtualized environments

Members:
Alberto Avritzer, Siemens Corporate Research, USA
Marco Vieira, University of Coimbra, Portugal
Aleksandar Milenkoski, KIT, Germany
Samuel Kounev, KIT, Germany

At the time of publication, the officers had not yet been elected.

WELCOME TO THE SPEC RESEARCH GROUP NEWSLETTER

We are proud to launch the first edition of the SPEC Research Group Newsletter. This regular publication will provide you with information on the latest developments, news, and announcements relevant to the benchmarking and quantitative system evaluation communities. Our newsletter is part of our mission to foster the exchange of knowledge and experiences between industry and academia in the field of quantitative system evaluation and analysis.

This first issue includes, among other items, a report from the 2nd Research Group (RG) Face-to-Face Meeting held on April 26th in Boston, a report on the 3rd ACM/SPEC International Conference on Performance Engineering (ICPE 2012) co-sponsored by the SPEC RG, a report on the current activities and latest developments in the RG Cloud working group as well as in the newly formed working group on intrusion detection, abstracts of selected papers from ICPE 2012, extended abstracts of the first two dissertations to win the SPEC Distinguished Dissertation Award, and finally brief descriptions of the first tools published in our new peer-reviewed tool repository.

We hope you will enjoy reading this issue, and we welcome and encourage your contributions of articles and suggestions for future coverage.

Samuel Kounev (KIT), Kai Sachs (SAP AG)

SPEC RESEARCH GROUP MISSION STATEMENT

The SPEC Research Group (RG) is an approved group of the Standard Performance Evaluation Corporation (SPEC) with the mission to promote innovative research in the area of quantitative system evaluation and analysis by serving as a platform for collaborative research efforts fostering the interaction between industry and academia in the field.

The scope of the group includes computer benchmarking, performance evaluation, and experimental system analysis, considering both classical performance metrics such as response time, throughput, scalability, and efficiency, as well as other non-functional system properties included under the term dependability, e.g., availability, reliability, and security. The conducted research efforts span the design of metrics for system evaluation as well as the development of methodologies, techniques, and tools for measurement, load testing, profiling, workload characterization, and dependability and efficiency evaluation of computing systems.

Samuel Kounev (KIT)


SPEC ANNOUNCEMENTS

Server Efficiency Rating Tool (SERT)
August 21, 2012

The SERT is being developed for the US Environmental Protection Agency (EPA) to be part of the next generation of the ENERGY STAR for Servers qualification process.

The SERT has been developed by the SPECpower committee, comprising representatives of many key server and software companies, as well as veteran SPEC benchmark developers and supporting contributors. The SERT has drawn from experience gained in developing and supporting the SPECpower_ssj2008 benchmark.

Unlike most SPEC products, the SERT is specifically designed not to be a comparative benchmark, in that it does not provide a composite power/performance score. Instead, it is composed of multiple workloads optimized to exercise the CPU, memory, and storage I/O subsystems. It produces detailed information relating to each subsystem's respective power and performance characteristics across multiple load levels, ranging from idle to 100% usage.

As well as working with the EPA, SPEC is engaging with other international regulatory bodies to encourage adoption of the SERT. Creating an internationally available test platform is aimed at reducing the testing burden on the server industry and enabling meaningful and consistent comparisons between products sold across the world. For more information about SERT, please see http://www.spec.org/sert/

Klaus Lange (SPECpower Committee Chair)

How Did This Get Published? Pitfalls in Experimental Evaluation of Computing Systems

June 21, 2012

Nelson J. Amaral gave a talk during the LCTES 2012 (Languages, Compilers, Tools and Theory for Embedded Systems) Special Session: Benchmarking and Performance Evaluation. The LCTES conference was held in conjunction with PLDI 2012 (Programming Language Design and Implementation) in Beijing, June 12–13. The talk was very well attended and well received by many senior people in the PLDI community. The slides from the talk are available online for download.

http://webdocs.cs.ualberta.ca/~amaral/Amaral-LCTES2012.pptx

Nelson J. Amaral (University of Alberta), Piotr Rygielski (KIT)

New OSG Cloud Group in SPEC
June 16, 2012

The Standard Performance Evaluation Corp. (SPEC) has formed a new group to research and recommend application workloads for benchmarking cloud computing performance.

The group functions under SPEC's Open Systems Group (OSG) and is working in cooperation with other SPEC committees and subcommittees to define cloud benchmark methodologies, determine and recommend application workloads, identify cloud metrics for existing SPEC benchmarks, and develop new cloud benchmarks. Current participants in OSG Cloud include AMD, Dell, IBM, Ideas International, Intel, Karlsruhe Institute of Technology, Oracle, Red Hat, and VMware. Long-time SPEC benchmark developer Yun Chao is a supporting contributor. The group collaborates with the SPEC Research Cloud group, which is working on gaining a broader understanding of cloud behavior and performance issues.

While cloud performance takes into account many of the same characteristics as current SPEC benchmarks – throughput, response time and power, for example – it also brings into play new metrics such as elasticity, defined as how quickly a service can adapt to changing customer needs.

“Cloud computing is on the rise and represents a major shift in how servers are used and how their performance is measured,” says Rema Hariharan, chair of OSG Cloud. “We want to assemble the best minds to define this space, create workloads, augment existing SPEC benchmarks, and develop new cloud-based benchmarks.”

The group targets three types of users for the workloads and benchmarks it will create:
• Hardware and software vendors providing products that enable cloud services.
• Cloud providers – entities that offer cloud services, such as IaaS, PaaS or SaaS.
• Business customers who would use benchmark results to help them select cloud providers.

OSG Cloud has already developed a 50-page report that details its objectives, benchmark considerations, characteristics of a cloud benchmark, and tools for creating metrics.

“We have gotten off to a great start in creating guidelines for benchmarks with clearly defined, standardized metrics, but we would like to see wider participation, especially from cloud providers and users,” says Hariharan.

www.spec.org

SPEC Announces New Benchmark
June 4, 2012

SPEC's Application Performance Characterization project group announces the release of SPECapc for PTC Creo® 2.0. The benchmark provides eight workflows that exercise all aspects of system performance when running the popular application. Composite scores are generated for graphics, shaded graphics, wireframe graphics, CPU and I/O performance.

www.spec.org

Over 100 Attendees at ICPE 2012
May 7, 2012

Over 100 participants from industry and academia, coming from more than a dozen countries spread across four continents, attended ICPE 2012 in Boston, USA. The technical program included 19 full research papers, 2 short papers, 2 industrial/experience papers, and 10 work-in-progress papers selected from over 80 submissions over all tracks. It also featured 2 keynote speakers of international repute, 9 poster/demo presentations, 3 tutorials, and 2 invited talks from the winners of the SPEC Distinguished Dissertation Award. The next ICPE conference will take place in April 2013 in Prague, Czech Republic.

www.spec.org

New Version of SPECjEnterprise2010
April 5, 2012

SPECjEnterprise2010 v1.03 is now available. This update includes performance-neutral changes to address known issues. Beginning May 1, 2012, all result submissions must be made using version 1.03. Licensees of earlier versions of the SPECjEnterprise2010 software may request a free update to 1.03 by contacting SPEC.

www.spec.org

Research Group Elections
March 23, 2012

Between the 5th and 23rd of March 2012, the members of the SPEC Research Group voted to elect the Steering Committee and officers of SPEC Research. The following individuals were elected to the SPEC Research Group Steering Committee: J. Nelson Amaral (University of Alberta), Rema Hariharan (AMD), Wilhelm Hasselbring (University of Kiel), Alexandru Iosup (TU Delft), Lizy John (University of Texas at Austin), Samuel Kounev (KIT), Klaus Lange (HP), Meikel Poess (Oracle), Kai Sachs (SAP AG), Seetharami Seelam (IBM TJ Watson Research Center), and Petr Tůma (Charles University of Prague).

The Steering Committee then elected the following RG officers – Chair: Samuel Kounev (KIT), Vice-Chair: Kai Sachs (SAP AG), Secretary: Seetharami Seelam (IBM TJ Watson Research Center), Release Manager: Qais Noorshams (KIT).

Piotr Rygielski (KIT)

Repository of Peer-Reviewed Tools Launched
February 2, 2012

A repository of peer-reviewed tools for quantitative system evaluation and analysis is now available at the SPEC RG web site. As a contribution to the community, SPEC RG addresses the need for a collection of tools that have undergone a thorough review process by multiple independent experts to ensure high quality and relevance. The review process covers important quality factors, including maturity, availability, and usability.

The repository is intended to have a broad scope, covering system evaluation and analysis with respect to both classical performance metrics, such as response time, throughput, scalability and efficiency, as well as other non-functional system properties included under the term dependability, e.g., availability, reliability, and security. In particular, tools for measurement, profiling, workload characterization, load testing, stress testing, and resilience testing are solicited.

Among the first tools to pass the review process are Faban and Kieker. Faban is a framework to automate server performance tests in multi-tier environments, already used by the SPECjEnterprise2010 and SPECsipInfrastructure2011 benchmarks. The tool was originally developed at Sun Microsystems and is distributed under the CDDL 1.0 license. Kieker is a framework for monitoring and analysis of the performance of software systems. Kieker was developed by a research team from the University of Kiel and is distributed under an Apache 2.0 license.

More tools are currently under review and further submissions are encouraged. After a careful review process, the tools are published on the SPEC RG web site. The tools are redistributed without modification, and the submitter retains the associated rights and responsibilities. As part of the acceptance, SPEC RG may ask the authors to make changes or enhance certain features or aspects, but SPEC RG itself does not make any changes.

The members of SPEC RG review the submitted tools using a wide spectrum of criteria – starting with code-related requirements such as source structure and documentation, and ending with practical issues including licensing and support. Thus, the tools are expected to perform as advertised in most configurations. The goal of the review process is to deliver tools that are robust and result in a productive experience for those who commit resources to using them.

http://research.spec.org/tools.html

www.spec.org


NEW SPEC RESEARCH WORKING GROUP: BENCHMARKING ARCHITECTURES FOR INTRUSION DETECTION IN VIRTUALIZED ENVIRONMENTS
July 31, 2012

The cloud computing paradigm, enabled by virtualization technology, continuously gains in popularity since it brings many benefits, such as a pay-per-use business model and on-demand resource provisioning. However, the ongoing wide migration to cloud systems is challenged by many concerns, among which security is a major one.

An IDS (intrusion detection system) is a common security mechanism which features detection of malicious events while monitoring regular host and/or network activities. Therefore, many academic and industrial organizations are conducting extensive research on novel IDSes specifically designed to operate in virtualized cloud environments. As the number and the popularity of such IDSes increase, benchmarking IDSes for cloud environments becomes imperative, since it provides insight into and a deeper understanding of their behavior and performance.

In order to contribute towards addressing the increasing demand for representative and rigorous IDS benchmarks for cloud platforms, the SPEC Research Group has formed a new working group on benchmarking architectures for intrusion detection in virtualized environments. The goal of the working group is to foster and facilitate innovative research through the exchange of ideas and experiences. Its membership currently includes representatives of Siemens Corporate Research (USA), the University of Coimbra (Portugal), and the Karlsruhe Institute of Technology (Germany). Interested people and organizations are strongly encouraged to join the new group.

The group has already initiated activities such as an in-depth analysis of the existing IDS benchmarking efforts, identification of relevant challenges and requirements, and development of benchmarking approaches. In the long term, the working group aims to develop a customizable and flexible IDS benchmarking framework which employs representative cloud workloads, accurate metrics, and a rigorous benchmarking methodology. The group may also widen its research scope in terms of designing and developing novel security mechanisms applicable in cloud environments, such as effective attack detection techniques and security system architectures.

Alberto Avritzer (Siemens Corporate Research), Marco Vieira (University of Coimbra), Aleksandar Milenkoski (KIT), Samuel Kounev (KIT)

FACE-TO-FACE MEETING OF SPEC RESEARCH GROUP STEERING COMMITTEE
April 26, 2012

The SPEC Research Group (SPEC RG) held a very productive face-to-face meeting during the Boston SPEC meeting on April 26th, 2012. The meeting was attended by 23 members (19 in person and 4 via teleconference). The chair of the SPEC RG, Samuel Kounev from the Karlsruhe Institute of Technology (KIT), commenced the meeting by evaluating the first year of the SPEC RG. 35 member organizations from academia and industry joined the RG within the first year. The first elections were held, and a Steering Committee including officers was finalized.

Since its inauguration, the SPEC RG has been meeting twice every month, and the first face-to-face meeting of the SPEC RG was held in Karlsruhe in conjunction with ICPE 2011. The group felt that co-locating face-to-face meetings with the ICPE conference is desirable. Thanks to the dedicated members of the group, the technical infrastructure of the RG and its public web site were set up and populated with relevant information. The first working group of the RG (RG Cloud), focusing on cloud computing, was launched in mid-2011.

The flagship conference ICPE was established as a regular event with a 12-month cycle. With over 120 participants coming from 20 different countries and 70 organizations worldwide, ICPE 2011 was a great success. A peer-reviewed tool repository was launched with an acceptance procedure. Two of the initial submissions passed our rigorous review process and were published. The SPEC Distinguished Dissertation Award was established, with some very high quality nominations for 2011. The winners of the award presented their work at ICPE 2012.

At this meeting, the Steering Committee evaluated the above-mentioned accomplishments and self-critically discussed ways to improve in 2012. One criticism concerned the biweekly meetings. To focus the meetings, we will set a target number of specific topics per meeting, increase the time spent on technical issues to two thirds of the meeting time, and make meeting minutes available immediately after each meeting.

As ICPE is now officially a joint endeavor of both WOSP and SIPEW, the ICPE Steering Committee will consist of six people from the SPEC RG and six people from WOSP. The SPEC RG Steering Committee discussed how the six people from the SIPEW side should be elected. It was decided that official elections will be held by May 5th.

A large portion of the meeting was taken up by discussing future activities. One of these activities was to build a repository of traces from real-life systems (performance, security, etc.). After Samuel Kounev introduced the idea, a lively discussion started about the legal issues of sharing such data. This discussion did not come to a conclusion and will be continued during subsequent conference calls. A second topic was a repository to ensure reproducibility of conference results. A discussion about the topic resulted in Petr Tůma agreeing to write a proposal to be circulated via e-mail.

Samuel Kounev continued the meeting by proposing new working groups. Possible new working groups would focus on Security Benchmarking, Big Data, and Dependability Benchmarking. Security Benchmarking will be pursued by KIT, Siemens, and the University of Coimbra. Meikel will attend a workshop on Big Data (http://clds.ucsd.edu/wbdb2012/) and advertise the SPEC RG as a potential consortium to drive the development of a benchmark. Marco Vieira of the University of Coimbra, together with Alberto Avritzer, will work on a proposal on Dependability Benchmarking.

Meikel Poess (Oracle Corporation)


SPEC DISTINGUISHED DISSERTATION AWARD WINNERS 2011

December 22, 2011

The Research Group of the Standard Performance Evaluation Corp. (SPEC) has awarded its first two Distinguished Dissertation Awards to Luk Van Ertvelde of Ghent University in Belgium and Kai Sachs of Technische Universität Darmstadt in Germany.

Luk Van Ertvelde was nominated by Professor Lieven Eeckhout from Ghent University. He receives the award for his dissertation titled “Workload Generation for Microprocessor Performance Evaluation.”

Kai Sachs was nominated by Professor Alejandro Buchmann from Technische Universität Darmstadt. He receives the award for his dissertation titled “Performance Modeling and Benchmarking of Event-Based Systems.”

The awards were presented at the ICPE 2012 International Conference, which took place on April 22–25, 2012, in Boston, Mass., USA.

The SPEC Distinguished Dissertation Award was established in 2011 to recognize outstanding dissertations within the scope of the SPEC Research Group in terms of scientific originality, scientific significance, practical relevance, impact, and quality of the presentation.

“It was a demanding task to select the winners among the high-quality nominations,” says the chair of the selection committee, Prof. Wilhelm Hasselbring of the University of Kiel in Germany. “It was great to see such outstanding work for the first awards program from the SPEC Research Group.”

Members of the selection committee include: J. Nelson Amaral, University of Alberta, Canada; Wilhelm Hasselbring, University of Kiel, Germany (Chair); Seetharami R. Seelam, IBM Watson Research Center, USA; Petr Tůma, Charles University of Prague, Czech Republic; Walter Bays, Oracle Corp., USA.

www.spec.org

Award Winner Dissertation: Workload Generation for Microprocessor Performance Evaluation

December 22, 2011

Although the availability of standardized benchmark suites has streamlined the process of performance evaluation, computer architects and engineers still face several important benchmarking challenges.

Benchmarks should be representative of the applications that are expected to run on the target computer system; however, it is not always possible to compose a set of representative benchmarks. The main reason is that standardized benchmark suites are typically derived from open-source programs – because industry hesitates to share proprietary applications – which may not be representative of the real-world applications of interest.

The amount of redundancy within and across benchmarks should be as small as possible. However, contemporary benchmark suites execute trillions of instructions to stress microprocessors in a meaningful way. As a result, it is infeasible to simulate entire benchmark suites using detailed cycle-accurate simulators.

Benchmarks should enable micro-architecture, architecture, and compiler research and development. Although existing benchmark suites satisfy this requirement, this is often not the case for benchmark reduction techniques, because they typically operate at the assembly level.

In this dissertation, I propose three novel benchmark generation and reduction techniques to address these challenges.

Code mutation [1] is a novel methodology that mutates a proprietary application to complicate reverse engineering so that it can be distributed as a benchmark among industry and academia. These benchmark mutants hide the functional semantics of proprietary applications while exhibiting similar performance characteristics. Consequently, they can be used as proxies for proprietary software to help drive performance evaluation by third parties.

Code mutation conceals the intellectual property of an application, but it does not lend itself to the generation of short-running benchmarks. Sampled simulation, on the other hand, reduces the simulation time of a benchmark by simulating only a small sample from a complete benchmark execution in a detailed manner. In sampled simulation, the performance bottleneck is the establishment of the micro-architectural state (particularly the state of the caches) at the beginning of each sampling unit, often referred to as the cold-start problem. I address this problem by proposing a new cache warm-up methodology, namely NSL-BLRL [2,3], which reduces sampled simulation time by an order of magnitude compared to the state of the art.

Although code mutation can be used in combination with sampled simulation to generate short-running workloads that can be distributed to third parties without revealing intellectual property, the limitation is that this approach operates at the assembly level. This excludes it from being used for architecture and compiler research. We therefore propose a novel benchmark synthesis methodology and framework [4,5] that aims at generating small but representative benchmarks in a high-level programming language, so that they can be used to explore both the architecture and compiler spaces.

[1] Luk Van Ertvelde and Lieven Eeckhout, “Dispersing Proprietary Applications as Benchmarks through Code Mutation”, in Proceedings of the International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2008, 201–210
[2] Luk Van Ertvelde, Filip Hellebaut, Lieven Eeckhout and Koen De Bosschere, “NSL-BLRL: Efficient Cache Warmup for Sampled Processor Simulation”, in Proceedings of the Annual Simulation Symposium (ANSS), 2006, 87–96
[3] Luk Van Ertvelde, Filip Hellebaut and Lieven Eeckhout, “Accurate and Efficient Cache Warmup for Sampled Processor Simulation through NSL-BLRL”, in The Computer Journal, Vol. 51, No. 2, 192–206, March 2008
[4] Luk Van Ertvelde and Lieven Eeckhout, “Benchmark Synthesis for Architecture and Compiler Exploration”, in Proceedings of the IEEE International Symposium on Workload Characterization (IISWC), 2010, 106–116
[5] Luk Van Ertvelde and Lieven Eeckhout, “Workload Reduction and Generation Techniques”, in IEEE Micro, Nov/Dec 2010, Vol. 30, No. 6

Luk Van Ertvelde (Ghent University, Belgium)


SPEC President Walter Bays presents the award to Luk Van Ertvelde.


Award Winner Dissertation: Performance Modeling and Benchmarking of Event-Based Systems

December 22, 2011

Event-based systems (EBS) are increasingly used as underlying technology in many mission-critical areas and large-scale environments, such as environmental monitoring and location-based services. Moreover, novel event-based applications are typically highly distributed and data intensive, with stringent requirements for performance and scalability. Common approaches to address these requirements are benchmarking and performance modeling. However, there was a lack of general performance modeling methodologies for EBS, as well as of test harnesses and benchmarks using representative workloads for EBS. Therefore, this thesis focused on approaches to benchmark EBS as well as the development of a performance modeling methodology. In this context, novel extensions for queueing Petri nets (QPNs) were proposed. The motivation was to support the development and maintenance of EBS that meet certain Quality-of-Service (QoS) requirements.

To address the lack of representative workloads, we developed the first industry standard benchmark for EBS jointly with the Standard Performance Evaluation Corporation (SPEC), in whose development and specification the author was involved as a chief benchmark architect and lead developer. Our efforts resulted in the SPECjms2007 standard benchmark. Its main contributions were twofold: based on the feedback of industrial partners, we specified a comprehensive standardized workload with different scaling options, and we implemented the benchmark using a newly developed complex and flexible framework. Using the SPECjms2007 benchmark, we introduced a methodology for performance evaluation of message-oriented middleware platforms and showed how the workload can be tailored to evaluate selected performance aspects. The standardized workload can be applied to other EBS. For example, we developed an innovative research benchmark for publish/subscribe-based communication named jms2009-PS, based on the SPECjms2007 workload. The proposed benchmarks are now the de facto standard benchmarks for evaluating messaging platforms and have already been used successfully by several industrial and research organizations as a basis for further research on performance analysis of EBS.

To describe workload properties and routing behavior, we introduced a novel formal definition of EBS and their performance aspects. Furthermore, we proposed an innovative approach to characterize the workload and to model the performance aspects of EBS. We used operational analysis techniques to describe the system traffic and derived an approximation for the mean event delivery latency. We showed how more detailed performance models based on QPNs could be built and used to provide more accurate performance prediction. It is the first general performance modeling methodology for EBS and can be used for an in-depth performance analysis as well as to identify potential bottlenecks. A further contribution is a novel terminology for performance modeling patterns targeting common aspects of event-based applications using QPNs.

To improve the modeling power of QPNs, we defined several extensions of the standard QPNs. They allow us to build models in a more flexible and general way and address several limitations of QPNs. By introducing an additional level of abstraction, it is possible to distinguish between logical and physical layers in models. This makes it possible to flexibly map logical to physical resources and thus makes it easy to customize the model to a specific deployment. Furthermore, we addressed two limiting aspects of standard QPNs: constant cardinalities and lack of transition priorities.

Finally, we validated our methodology for modeling EBS in two case studies and successfully predicted system behavior and performance under load. As part of the first case study, we extended SIENA, a well-known distributed EBS, with a runtime measurement framework and predicted the runtime behavior, including delivery latency, for a basic workload. In the second case study, we developed a comprehensive model of the complete SPECjms2007 workload. To model the workload, we applied our performance modeling patterns as well as our QPN extensions. We considered a number of different scenarios with varying workload intensity (up to 4,500 transactions / 30,000 messages per second) and compared the model predictions against measurements. The results demonstrated the effectiveness and practicality of the proposed modeling and prediction methodology in the context of a real-world scenario.

This thesis opens up new avenues of frontier research in the area of event-based systems. Our performance modeling methodology can be used to build self-adaptive EBS using automatic model extraction techniques. Such systems could dynamically adjust their configuration to ensure that QoS requirements are continuously met.

Kai Sachs (TU Darmstadt, SAP AG)

Kai Sachs (right) receives the award from Walter Bays.

CALL-FOR-NOMINATIONS: SPEC DISTINGUISHED DISSERTATION AWARD 2012
July 25, 2012

The SPEC Distinguished Dissertation Award (formerly called the SPEC Benchmarking Research PhD Award) aims to recognize outstanding doctoral dissertations within the scope of the SPEC Research Group in terms of scientific originality, scientific significance, practical relevance, impact, and quality of the presentation.

The scope of the SPEC Research Group includes computer benchmarking, performance evaluation, and experimental system analysis in general, considering both classical performance metrics such as response time, throughput, scalability and efficiency, as well as other non-functional system properties included under the term dependability, e.g., availability, reliability, and security. Contributions of interest span the design of metrics for system evaluation as well as the development of methodologies, techniques and tools for measurement, load testing, profiling, workload characterization, and dependability and efficiency evaluation of computing systems.

The winner will receive $1000, which will be awarded at the ICPE 2013 International Conference. A nomination should include:

• A nomination letter (including the name of the student, the title of the dissertation, the institution where the dissertation was defended, and the date of the defense).
• A one-page report that outlines the outstanding contribution of the dissertation.
• The dissertation itself, including a one-page extended abstract of the dissertation. (If the dissertation is written in a language other than English, it may be accompanied by publications, in English, describing the same research as the dissertation.)

The SPEC Distinguished Dissertation Award is open to dissertations that have been defended between October 1, 2011, and September 30, 2012. If there are several outstanding submissions, the committee may split the award between them. The submission deadline is September 30, 2012.

Nominations are welcome at any time before the final submission deadline. Nominations, or questions about the application process, should be sent by e-mail to: [email protected]. Self-nominations are not accepted.

Selection committee: Chair: Wilhelm Hasselbring, University of Kiel, Germany; David A. Bader, Georgia Institute of Technology, USA; Walter Bays, Oracle Corporation, USA; Walter Binder, University of Lugano, Switzerland; Rema Hariharan, AMD, USA; Samuel Kounev, Karlsruhe Institute of Technology (KIT), Germany; Diwakar Krishnamurthy, University of Calgary, Canada; Klaus Lange, HP, USA; John Murphy, University College Dublin, Ireland; Marco Vieira, University of Coimbra, Portugal.

Qais Noorshams (KIT)

KIEKER: A FRAMEWORK FOR APPLICATION PERFORMANCE MONITORING AND DYNAMIC SOFTWARE ANALYSIS
July 9, 2012

Kieker is a framework for monitoring and analyzing the runtime behavior of concurrent and distributed software systems. It is maintained by the Software Engineering Group at the University of Kiel, Germany. Kieker development started in 2006 as a small tool for monitoring response times of Java software operations. Since then, it has evolved into a powerful and extensible framework for application performance management and dynamic software analysis, developed and employed for various purposes in research, teaching, and industrial practice. Application areas include performance and dependency analysis, online capacity management, simulation, as well as extraction of architecture-level performance and usage models. Being designed for continuous monitoring in production environments, it imposes only a low performance overhead, as evaluated experimentally with micro- and macro-benchmarks. Kieker has also been used for dynamic analysis of production systems in industrial contexts.

The framework is structured into a monitoring and an analysis component. On the monitoring side, monitoring probes collect measurements represented as monitoring records, which a monitoring writer passes to a configured monitoring log or stream. On the analysis side, monitoring readers import monitoring records of interest from the monitoring log/stream and pass them to a configurable pipe-and-filter architecture of analysis plugins. Focusing on application-level analysis, Kieker includes monitoring probes for collecting timing and trace information from distributed executions of software operations. Moreover, probes for sampling system-level measures, e.g., CPU utilization and memory usage, are included. In addition to instrumentation and monitoring support for Java, adapters for other platforms, such as .NET, COM, and COBOL, have been developed recently. Kieker supports monitoring logs and streams utilizing file systems, databases, messaging queues, etc. A number of plugins for extracting, analyzing, and visualizing software architectural models, such as calling-dependency graphs, call trees, and sequence diagrams, are included. Utilizing the framework's extensibility, custom components can be developed easily if required.
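To make the probe–record–writer flow described above concrete, here is a minimal manual-instrumentation sketch, assuming the Kieker 1.x Java API (constant and method names may differ between versions; in practice, the AspectJ-based probes shipped with Kieker are typically woven in instead of hand-written code):

```java
import kieker.common.record.controlflow.OperationExecutionRecord;
import kieker.monitoring.core.controller.IMonitoringController;
import kieker.monitoring.core.controller.MonitoringController;

public class MonitoredService {
    // Singleton controller; the writer (file system, database, message
    // queue, ...) is selected via kieker.monitoring.properties.
    private static final IMonitoringController MONITORING_CTRL =
            MonitoringController.getInstance();

    public void processRequest() {
        final long tin = MONITORING_CTRL.getTimeSource().getTime();  // entry timestamp
        doBusinessLogic();
        final long tout = MONITORING_CTRL.getTimeSource().getTime(); // exit timestamp
        // Create a monitoring record for this execution and hand it to the
        // controller, which passes it to the configured monitoring writer.
        MONITORING_CTRL.newMonitoringRecord(new OperationExecutionRecord(
                "public void MonitoredService.processRequest()", // operation signature
                OperationExecutionRecord.NO_SESSION_ID,          // no session context
                OperationExecutionRecord.NO_TRACE_ID,            // not part of a trace
                tin, tout,
                OperationExecutionRecord.NO_HOSTNAME,            // default hostname
                OperationExecutionRecord.NO_EOI_ESS,             // execution order index
                OperationExecutionRecord.NO_EOI_ESS));           // execution stack size
    }

    private void doBusinessLogic() { /* application code */ }
}
```

On the analysis side, such records would then be read back from the log/stream and fed through the pipe-and-filter plugins mentioned above, e.g., to reconstruct call trees.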

Kieker is open-source software, freely available at http://kieker-monitoring.net. In 2011, Kieker was reviewed and accepted for distribution as part of the SPEC Research Group's repository of peer-reviewed tools for quantitative system evaluation and analysis (http://research.spec.org/projects/tools.html). At ICPE 2012, we gave an invited poster/demo presentation of the framework.

http://dx.doi.org/10.1145/2188286.2188326

André van Hoorn (Christian-Albrechts-University of Kiel)

FABAN: A PLATFORM FOR PERFORMANCE TESTING OF SOFTWARE

June 21, 2012

Faban is a free and open source platform for performance testing of software applications. Faban can be used in performance, scalability, and load testing of almost any type of server application, including complex multi-tiered (web/cache/database type) applications. At the same time, Faban can also be used to develop a micro-benchmark that tests a single server, such as an FTP server. If the application accepts requests on a network, Faban can test and measure it.

Faban provides an API to develop the workloads or benchmarks (called the Faban Driver Framework) and a runtime execution environment (known as the Faban Harness). It automates the handling of client threads, timing, metrics collection, and production of reports.

The Faban Driver Framework is an API-based framework and component model to help develop new workloads rapidly. The driver framework controls the lifecycle of the benchmark run as well as the stochastic model used to simulate users. It provides built-in support for many servers (e.g., Apache httpd, MySQL, Oracle RDBMS, memcached), automating their start/stop and stats collection. It provides a well-documented interface to add support for any other server.

The Faban Harness is a tool to automate the running of workloads. Faban provides an easy-to-use web interface to configure and queue runs, and includes functionality to view, compare, and graph run outputs and to archive results. For more information, visit the Faban web site http://faban.org.

Shanti Subramanyam
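To give a flavor of the declarative style of the Driver Framework described above, the following rough sketch is modeled on Faban's published driver examples; the exact annotation attributes, the transport factory, and the target URLs are assumptions for illustration rather than verbatim API usage. The framework instantiates one driver object per simulated user and invokes the operations according to the declared mix:

```java
import com.sun.faban.driver.BenchmarkDefinition;
import com.sun.faban.driver.BenchmarkDriver;
import com.sun.faban.driver.BenchmarkOperation;
import com.sun.faban.driver.FlatMix;
import com.sun.faban.driver.HttpTransport;
import com.sun.faban.driver.Timing;

@BenchmarkDefinition(name = "Sample Web Workload", version = "0.1")
@BenchmarkDriver(name = "SampleWebDriver", threadPerScale = 1)
@FlatMix(operations = {"Home", "Search"}, mix = {70, 30}, deviation = 2)
public class SampleWebDriver {

    // Transport supplied by the framework for timed network calls.
    private final HttpTransport http = HttpTransport.newInstance();

    @BenchmarkOperation(name = "Home", max90th = 0.5, timing = Timing.AUTO)
    public void doHome() throws Exception {
        // Timing.AUTO lets the harness time the network call automatically.
        http.fetchURL("http://testserver:8080/");   // hypothetical system under test
    }

    @BenchmarkOperation(name = "Search", max90th = 1.0, timing = Timing.AUTO)
    public void doSearch() throws Exception {
        http.fetchURL("http://testserver:8080/search?q=faban");
    }
}
```

The harness then handles everything the driver does not declare: spawning agent threads according to the scale, pacing operations via the configured stochastic model, and aggregating the per-operation response-time metrics into the final report.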

REPORT FROM THE 3RD ACM/SPEC INTERNATIONAL CONFERENCE ON PERFORMANCE ENGINEERING (ICPE 2012)
August 7, 2012

The ACM/SPEC ICPE 2012 took place in the Omni Parker House, Boston, from the 22nd to the 25th of April. Local general chair David Kaeli emphasized in his opening remarks the historical significance of the hotel: it was the workplace of well-known persons such as Hồ Chí Minh and Malcolm X. John F. Kennedy announced his candidacy for Congress in the press room, which was also used for the conference.

This place provided the right spirit for the event. More than 100 participants from industry and academia came together to discuss the latest research results in the field of performance engineering. The conference program covered a variety of topics, including benchmarking, energy efficiency, performance modeling, performance and software development processes, and performance evaluation of cloud, data-intensive, and adaptive systems. The technical program included 19 full research papers, 2 short papers, 2 industrial/experience papers, and 10 work-in-progress papers selected from over 80 submissions over all tracks. The program was enriched by social events, a poster & demo exhibition, and the presentations of the winners of the SPEC Distinguished Dissertation Award (see page 6). The keynote presentations were two highlights of the conference. In “Assuring the Trustworthiness of the Smarter Electric Grid”, Prof. William H. Sanders discussed the challenges in assuring the trustworthiness – performance, dependability, and security – of the emerging smart grid. The second keynote was presented by Dr. Amnon Naamad from EMC and discussed “New Challenges in Performance Engineering” with regard to new technologies such as virtualization and cloud, their impact on IT, and tools helping to exploit these technologies.

The PC Chairs had the difficult challenge of selecting the winners of the best paper awards. Best paper awards were presented in the following categories: Best Innovation, Best Industry-Related, and Best Student Paper (for more details see page 10).

Kai Sachs (SAP AG)



ICPE 2012: BEST PAPER AWARDS
June 21, 2012

At the 2012 International Conference on Performance Engineering (ICPE 2012) in Boston, USA, three outstanding papers were distinguished by the Program and Awards Chairs. The awards were presented in three different categories: Best Innovation, Industry-Related, and Student Paper Award.

The Best Innovation Paper Award was presented to Giuliano Casale and Peter Harrison (Imperial College London) for the paper “A Class of Tractable Models for Run-Time Performance Evaluation”.

The Best Industry-Related Paper Award was presented to Thijmen de Gooijer, Anton Jansen (ABB Corporate Research, Västerås, Sweden), Heiko Koziolek (ABB Corporate Research, Ladenburg, Germany), and Anne Koziolek (University of Zurich, Switzerland) for the paper “An Industrial Case Study of Performance and Cost Design Space Exploration”.

The Best Student Paper Award was presented to Vlastimil Babka, Peter Libič, Tomáš Martinec, and Petr Tůma (Charles University in Prague, Czech Republic) for the paper “On the Accuracy of Cache Sharing Models”. Congratulations to the award winners!

Piotr Rygielski (KIT)


ANNOUNCEMENT OF ICPE 2013
June 17, 2012

The 4th ACM/SPEC International Conference on Performance Engineering (ICPE) will take place on April 21–24, 2013, in Prague, Czech Republic.

ICPE is a joint meeting of the Association for Computing Machinery (ACM) and the Standard Performance Evaluation Corporation (SPEC), which provides a forum for the integration of theory and practice in the field of performance engineering. ICPE brings together researchers and industry practitioners to share experiences, discuss challenges, and report work-in-progress and state-of-the-art results in performance engineering of software and systems.

Historically, ICPE has grown from the ACM Workshop on Software and Performance (WOSP) and the SPEC International Performance Evaluation Workshop (SIPEW). This unique connection to both industrial and academic backgrounds gives ICPE participants the singular advantage of connecting the theoretical and practical perspectives on performance engineering – which is also reflected in the variety of contribution styles, including: traditional research papers on both basic and applied research, covering any of the wide range of topics related to performance engineering, to be reviewed by the scientific program committee; industrial and experience papers covering novel applications and innovative implementations of performance engineering, interesting performance results, and industrial experience, to be reviewed by the industrial program committee; and work-in-progress and vision papers providing space for reporting on promising preliminary results that have not yet been fully validated, emerging research challenges, and long-term research directions.

The conference proceedings will be published by ACM and included in the ACM Digital Library. Authors are also invited to submit auxiliary technical artifacts of their work (experimental data, evaluation scripts), thus increasing the potential for further use, citation, or expansion of their results.

Among the solicited paper topics one can find the following.

In the category of 'Performance and Software Development Processes' the topics include: Performance engineering for systems including, but not limited to: smart grids, cloud platforms, storage systems, sensor nets, embedded and real-time systems, control systems, multi-tier systems, event-driven systems; Techniques to elicit and incorporate performance, availability, power and other extra-functional requirements throughout the software and system life cycle; Agile performance-test-driven development; Performance-requirements engineering and design for performance predictability; Software performance patterns and anti-patterns.

In the category of 'Performance modeling and prediction' the topics include: Languages, annotations, tools and methodologies to support model-based performance engineering; Analytical, simulation, statistical, AI-based, and hybrid modeling/prediction methods; Automated model building; Model validation and calibration techniques.

In the category of 'Performance measurement, testing and experimental analysis' the topics include: Performance measurement, monitoring, and workload characterization techniques; Test planning, tools for performance load testing, measurement, profiling and tuning; Methodologies for performance testing and functional testing; Reproducibility of performance studies.

In the category of 'Benchmarking, configuration, sizing, and capacity planning' the topics include: Benchmark design and benchmarking methods, metrics, and suites; Development of new, configurable, and/or scalable benchmarks; Use of benchmarks in industry and academia; System configuration, sizing and capacity planning techniques.

In the category of 'System management/optimization' the topics include: Use of models for run-time application configuration/management; Online performance prediction and model parameter estimation; Adaptive resource management.

In the category of 'Performance in Cloud, virtualized and multi-core systems' the topics include: Modeling, monitoring, and testing of cloud computing platforms and applications; Performance/management of virtualized machines, storage and networks; Performance engineering of multi-core and parallel systems.

In the category of 'Performance and power' the topics include: Algorithms for combined power and performance management; Instrumentation, profiling, modeling and measurement of power consumption; Power/performance engineering in grid/cluster/cloud/mobile computing systems.

http://icpe2013.ipd.kit.edu/

Petr Tůma (Charles University of Prague)


ICPE 2012: ABSTRACTS OF SELECTED PAPERS

July 21, 2012

Giuliano Casale, Peter Harrison: A Class of Tractable Models for Run-Time Performance Evaluation

Run-time resource allocation requires the availability of system performance models that are both accurate and inexpensive to solve. We here propose a new methodology for run-time performance evaluation based on a class of closed queueing networks. Compared to exponential product-form models, the proposed queueing networks also support the inclusion of resources having first-come first-served scheduling under non-exponential service times. Motivated by the lack of an exact solution for these networks, we propose a fixed-point algorithm that approximates performance indexes in linear time and linear space with respect to the number of requests considered in the model. Numerical evaluation shows that, compared to simulation, the proposed models solved by fixed-point iteration have errors of about 1%–6%, while, on the same test cases, exponential product-form models suffer errors even in excess of 100%. Execution times on commodity hardware are of the order of a few seconds or less, making the proposed methodology practical for run-time decision-making.

http://dx.doi.org/10.1145/2188286.2188299
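The paper's algorithm additionally handles FCFS stations with non-exponential service times; purely to illustrate the fixed-point style of solution the abstract refers to, the following sketch implements the classic Schweitzer approximate mean-value analysis for a closed, single-class network (an illustration, not the authors' method):

```java
/**
 * Schweitzer fixed-point approximation (AMVA) for a closed, single-class
 * queueing network with queueing stations only. demand[k] is the service
 * demand at station k, n is the job population. Returns per-station mean
 * queue lengths; throughput is n divided by the total residence time.
 */
public final class FixedPointAmva {

    public static double[] queueLengths(double[] demand, int n) {
        final int m = demand.length;
        final double[] q = new double[m];
        java.util.Arrays.fill(q, (double) n / m); // initial guess: even spread

        for (int iter = 0; iter < 100_000; iter++) {
            // Residence time per station: an arriving job sees (n-1)/n
            // of the current mean queue (the Schweitzer approximation).
            final double[] r = new double[m];
            double rTotal = 0.0;
            for (int k = 0; k < m; k++) {
                r[k] = demand[k] * (1.0 + q[k] * (n - 1.0) / n);
                rTotal += r[k];
            }
            final double throughput = n / rTotal; // no think time assumed

            double maxDelta = 0.0;
            for (int k = 0; k < m; k++) {
                final double qNew = throughput * r[k]; // Little's law per station
                maxDelta = Math.max(maxDelta, Math.abs(qNew - q[k]));
                q[k] = qNew;
            }
            if (maxDelta < 1e-10) break; // fixed point reached
        }
        return q;
    }
}
```

Each iteration costs O(m) time and space, which mirrors the linear-cost property that makes such fixed-point solvers attractive for run-time decision-making.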

Thijmen de Gooijer, Anton Jansen, Heiko Koziolek, Anne Koziolek: An Industrial Case Study of Performance and Cost Design Space Exploration

Determining the trade-off between performance and costs of a distributed software system is important, as it enables fulfilling performance requirements in a cost-efficient way. The large number of design alternatives for such systems often leads software architects to select a suboptimal solution, which may either waste resources or be unable to cope with future workloads. Recently, several approaches have appeared to assist software architects with this design task. In this paper, we present a case study applying one of these approaches, i.e. PerOpteryx, to explore the design space of an existing industrial distributed software system from ABB. To facilitate the design exploration, we created a highly detailed performance and cost model, which was instrumental in determining a cost-efficient architecture solution using an evolutionary algorithm. The case study demonstrates the capabilities of various modern performance modeling tools and a design space exploration tool in an industrial setting, provides lessons learned, and helps other software architects in solving similar problems.

http://dx.doi.org/10.1145/2188286.2188319

Vlastimil Babka, Peter Libič, Tomáš Martinec, Petr Tůma: On the Accuracy of Cache Sharing Models

Memory caches significantly improve the performance of workloads that have temporal and spatial locality by providing faster access to data. Current processor designs have multiple cores sharing a cache. To accurately model a workload's performance and to improve system throughput by intelligently scheduling workloads on cores, we need to understand how sharing caches between workloads affects their data accesses.

Past research has developed analytical models that estimate the cache behavior for combined workloads given the stack distance profiles describing these workloads. We extend this research by presenting an analytical model with contributions to accuracy and composability – our model makes fewer simplifying assumptions than earlier models, and its output is in the same format as its input, which is an important property for hierarchical composition during software performance modeling.

To compare the accuracy of our analytical model with earlier models, we attempted to reproduce the reported accuracy of those models. This proved to be difficult. We provide additional insight into the major factors that influence analytical model accuracy.

http://dx.doi.org/10.1145/2188286.2188294
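Stack distance profiles of the kind these models consume can be obtained from an address trace by LRU stack simulation. The sketch below (illustrative only, not the paper's tooling) computes a stack-distance histogram; the stack distance of an access is the number of distinct addresses touched since the previous access to the same address, with -1 counting cold first references:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Computes a stack-distance histogram from a trace of (cache-line)
 * addresses via LRU stack simulation. O(n * stack size) -- fine for a
 * sketch; balanced trees are typically used for large traces.
 */
public final class StackDistanceProfile {

    public static Map<Integer, Long> profile(long[] trace) {
        final List<Long> lruStack = new ArrayList<>();  // most recent at index 0
        final Map<Integer, Long> histogram = new HashMap<>();
        for (final long address : trace) {
            final int distance = lruStack.indexOf(address); // -1 on a cold reference
            if (distance >= 0) {
                lruStack.remove(distance);                  // remove by index
            }
            lruStack.add(0, address);                       // push to top of stack
            histogram.merge(distance, 1L, Long::sum);
        }
        return histogram;
    }
}
```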

Alexander Wert, Jens Happe, Dennis Westermann: Integrating Software Performance Curves with the Palladio Component Model

Software performance engineering for enterprise applications is becoming more and more challenging as the size and complexity of software landscapes increases. Systems are built on powerful middleware platforms, existing software components, and third-party services. The internal structure of such a software basis is often unknown, especially if business and system boundaries are crossed. Existing model-driven performance engineering approaches realize a pure top-down prediction approach. Software architects have to provide a complete model of their system in order to conduct performance analyses. Measurement-based approaches depend on the availability of the complete system under test. In this paper, we propose a concept for the combination of model-driven and measurement-based performance engineering. We integrate software performance curves with the Palladio Component Model (PCM) (an advanced model-based performance prediction approach) in order to enable the evaluation of enterprise applications which depend on a large software basis.




http://dx.doi.org/10.1145/2188286.2188339

Sadeka Islam, Kevin Lee, Alan Fekete, Anna Liu: How a Consumer Can Measure Elasticity for Cloud Platforms

One major benefit claimed for cloud computing is elasticity: the cost to a consumer of computation can grow or shrink with the workload. This paper offers improved ways to quantify the elasticity concept, using data available to the consumer. We define a measure that reflects the financial penalty to a particular consumer, from under-provisioning (leading to unacceptable latency or unmet demand) or over-provisioning (paying more than necessary for the resources needed to support a workload). We have applied several workloads to a public cloud; from our experiments we extract insights into the characteristics of a platform that influence its elasticity. We explore the impact of the rules used to increase or decrease capacity.

http://dx.doi.org/10.1145/2188286.2188301
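As an illustration of the kind of consumer-side penalty measure the abstract describes, the following sketch accumulates, per charging interval, a cost for unmet demand and a cost for excess capacity; the cost parameters and the simple linear form are assumptions for illustration, not the authors' exact definition:

```java
/**
 * Illustrative consumer-side elasticity penalty: for each interval, charge
 * unmet demand at an under-provisioning rate (SLA/latency penalty) and
 * excess capacity at an over-provisioning rate (rent for idle resources).
 */
public final class ElasticityPenalty {

    public static double penalty(double[] demanded, double[] supplied,
                                 double underProvisioningCost,
                                 double overProvisioningCost) {
        double total = 0.0;
        for (int t = 0; t < demanded.length; t++) {
            final double surplus = supplied[t] - demanded[t];
            total += (surplus < 0)
                    ? -surplus * underProvisioningCost // unmet demand this interval
                    : surplus * overProvisioningCost;  // paid-for but idle capacity
        }
        return total;
    }
}
```

A perfectly elastic platform would track demand exactly and drive both terms to zero; the metric thus rewards provisioning rules that adapt quickly in both directions.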

Rasha Tawhid, Dorina Petriu: User-Friendly Approach for Handling Performance Parameters during Predictive Software Performance Engineering

A Software Product Line (SPL) is a set of similar software systems that share a common set of features. Instead of building each product from scratch, SPL development takes advantage of the reusability of the core assets shared among the SPL members. In this work, we integrate performance analysis into the early phases of the SPL development process, applying the same reusability concept to the performance annotations. Instead of annotating from scratch the UML model of every derived product, we propose to annotate the SPL model once with generic performance annotations. After deriving the model of a product from the family model by an automatic transformation, the generic performance annotations need to be bound to concrete product-specific values provided by the developer. Dealing manually with a large number of performance annotations, by asking the developer to inspect every diagram in the generated model and to extract these annotations, is an error-prone process. In this paper we propose to automate the collection of all generic parameters from the product model and to present them to the developer in a user-friendly format (e.g., a spreadsheet per diagram, indicating each generic parameter together with guiding information that helps the user in providing concrete binding values). There are two kinds of generic parametric annotations handled by our approach: product-specific (corresponding to the set of features selected for the product) and platform-specific (such as device choices, network connections, middleware, and runtime environment). The model transformations for (a) generating a product model with generic annotations from the SPL model, (b) building the spreadsheet with generic parameters and guiding information, and (c) performing the actual binding are all realized in the Atlas Transformation Language (ATL).

http://dx.doi.org/10.1145/2188286.2188304
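
The binding step can be illustrated with a toy example. In the paper this is done with ATL transformations over UML models and a generated spreadsheet; the flat dictionaries and the $-prefix convention below are illustrative assumptions only.

# Illustrative sketch of collecting and binding generic annotations.
product_model = {
    "CheckOut.execTime": "$reqVideo",   # product-specific generic parameter
    "Network.latency":   "$netDelay",   # platform-specific generic parameter
    "Payment.execTime":  "3.2ms",       # already concrete
}

def collect_generic(model):
    # Gather unbound parameters, as the generated spreadsheet would list them.
    return sorted(v for v in model.values() if v.startswith("$"))

def bind(model, values):
    # Replace each generic annotation with the developer-provided value.
    return {k: values.get(v, v) for k, v in model.items()}

print(collect_generic(product_model))   # ['$netDelay', '$reqVideo']
print(bind(product_model, {"$reqVideo": "5ms", "$netDelay": "20ms"}))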

Mike G. Tricker, Klaus-Dieter Lange, Jeremy A. Arnold, Hansfried Block, Christian Koopmann: The Implementation of the Server Efficiency Rating Tool

The Server Efficiency Rating Tool (SERT) has been developed by the Standard Performance Evaluation Corporation (SPEC) at the request of the US Environmental Protection Agency (EPA), prompted by concerns that US data centers consumed almost 3% of all energy in 2010. Since the majority was consumed by servers and their associated heat dissipation systems, the EPA launched the ENERGY STAR Computer Server program, focusing on providing projected power consumption information to aid potential server users and purchasers. This program has now been extended to a world-wide audience.

This paper expands upon the one published in 2011, which described the initial design and early development phases of the SERT. Since that publication, the SERT has continued to evolve and entered its first beta phase in October 2011, with the goal of being released in 2012. This paper describes more of the details of how the SERT is structured, including how components interrelate, how the underlying system capabilities are discovered, and how the various hardware subsystems are measured individually using dedicated worklets.

http://dx.doi.org/10.1145/2188286.2188307
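
The worklet idea can be outlined in a few lines. The code below is an invented stand-in, not SERT code or its API: a real harness paces work to target load levels and samples a power analyzer alongside the throughput counter.

import time

def cpu_worklet(duration_s):
    # Busy loop standing in for a CPU-bound worklet; returns transactions done.
    end, transactions = time.time() + duration_s, 0
    while time.time() < end:
        sum(i * i for i in range(1000))   # one fixed unit of work
        transactions += 1
    return transactions

def measure(worklet, load_levels=(0.25, 0.5, 0.75, 1.0), interval_s=2.0):
    # Exercise one subsystem at graduated levels and record the throughput
    # achieved at each (crude stand-in: shorter runs rather than throttled
    # intensity); per-subsystem results feed the efficiency rating.
    return [(level, worklet(interval_s * level)) for level in load_levels]

print(measure(cpu_worklet))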


The International Conference on Performance Engineering (ICPE) provides a forum for the integration of theory and practice in the field of performance engineering. ICPE has grown out of the ACM Workshop on Software Performance (WOSP) and the SPEC International Performance Engineering Workshop (SIPEW). It brings together researchers and industry practitioners to share ideas and present their experiences, discuss challenges, and report state-of-the-art and in-progress research on performance engineering of software and systems. Topics of interest include:

Benchmarking, configuration, sizing, and capacity planning
• Benchmark design and benchmarking methods, metrics, and suites
• Development of new, configurable, and/or scalable benchmarks
• Use of benchmarks in industry and academia
• System configuration, sizing, and capacity planning techniques

Performance in Cloud, virtualized, and multi-core systems
• Modeling, monitoring, and testing of cloud platforms and applications
• Performance/management of virtualized machines, storage, and networks
• Performance engineering of multi-core and parallel systems

Performance and power
• Algorithms for combined power and performance management
• Instrumentation, profiling, modeling, and measurement of power consumption
• Power/performance engineering in grid/cluster/cloud/mobile computing systems

Performance modeling and evaluation in other domains such as:
• Web-based systems, e-business, web services, SOAs
• Transaction-oriented and event-based systems
• Embedded and autonomous systems
• Real-time and multimedia systems
• Peer-to-peer, mobile, and wireless systems

Performance and software development processes
• Techniques to elicit and incorporate performance, availability, power, and other extra-functional requirements throughout the software and system lifecycle
• Agile performance-test-driven development
• Performance-requirements engineering and design for software performance predictability
• Software performance modeling, patterns, and anti-patterns

Performance modeling and prediction
• Languages, annotations, tools, and methodologies to support model-based performance engineering
• Analytical, simulation, statistical, AI-based, and hybrid modeling/prediction methods
• Automated model discovery and model building
• Model validation and calibration techniques

Performance measurement and experimental analysis
• Performance measurement, monitoring, and workload characterization techniques
• Test planning, tools for performance, load testing, measurement, profiling, and tuning
• Model extraction for functional or partly functional systems
• Methodologies for performance testing and for functional testing
• Reproduction and reproducibility of performance studies

System management/optimization
• Use of models for run-time configuration/management
• Online performance prediction and model parameter estimation
• Adaptive resource management

Multiple different kinds of papers are sought: basic and applied research, industrial/experience reports, and work-in-progress/vision papers. Different acceptance criteria apply to each category. The conference proceedings will be published by ACM and included in the ACM Digital Library. The best conference paper will receive a Best Paper Award.

Important Dates
Research paper submissions: 24 Sep 2012
Research paper notification: 14 Dec 2012
Industrial/experience paper submissions: 30 Oct 2012
Poster and demo paper submissions: 13 Nov 2012
Tutorial proposal submissions: 10 Nov 2012
Work-in-progress/vision paper submissions: 14 Jan 2013

Organizing Committee
General Chairs: Petr Tůma, Charles University, Czech Republic; Giuliano Casale, Imperial College London, UK
Program Chairs: J. Nelson Amaral, University of Alberta, Canada; Tony Field, Imperial College London, UK
Industrial Chair: Seetharami R. Seelam, IBM Research, USA
Tutorial Chair: Mirco Tribastone, LMU Munich, Germany
Demos and Posters Chair: Kaustubh Joshi, AT&T Labs Research, USA
Publicity Chairs: John Murphy, University College Dublin, Ireland; Kai Sachs, SAP AG, Germany
Finance Chair: Lubomír Bulej, Charles University, Czech Republic
Publication Chair: Evgenia Smirni, College of William and Mary, USA
Awards Chairs: Lucy Cherkasova, HP Labs, USA; Vittorio Cortellessa, Università di L'Aquila, Italy

Program Committee
Martin Arlitt, HP Labs, USA and University of Calgary, Canada
Alberto Avritzer, Siemens Corporate Research, USA
Steffen Becker, University of Paderborn, Germany
Anne Benoit, ENS Lyon – LIP, France
Steve Blackburn, ANU, Australia
Andre Bondi, Siemens Corporate Research, USA
Edson Borin, Universidade de Campinas, Brazil
Jeremy Bradley, Imperial College London, UK
Lydia Chen, IBM Zurich, Switzerland
Lucy Cherkasova, Hewlett-Packard Laboratories, USA
Lawrence Chung, University of Texas at Dallas, USA
Susanna Donatelli, Università di Torino, Italy
Sandhya Dwarkadas, University of Rochester, USA
Dick Epema, Delft University of Technology, Netherlands
Dror Feitelson, Hebrew University, Israel
Sebastian Fischmeister, University of Waterloo, Canada
Stephen Gilmore, University of Edinburgh, UK
Lars Grunske, TU Kaiserslautern, Germany
Wilhelm Hasselbring, University of Kiel, Germany
Michael Hind, IBM T.J. Watson Research Center, USA
Robert Hundt, Google Inc., USA
Alexandru Iosup, TU Delft, The Netherlands
Stephen Jarvis, University of Warwick, UK
Carlos Juiz, University of the Balearic Islands, Spain
David Kaeli, Northeastern University, USA
William Knottenbelt, Imperial College London, UK
Samuel Kounev, Karlsruhe Institute of Technology, Germany
Heiko Koziolek, ABB Corporate Research, Germany
Anirban Mahanti, NICTA, Australia
Pat Martin, Queen's University, Canada
Daniel Menasce, George Mason University, USA
Raffaela Mirandola, Politecnico di Milano, Italy
David Pearce, Victoria University of Wellington, New Zealand
Dorina Petriu, Carleton University, Canada
Meikel Poess, Oracle Corporation, USA
Lawrence Rauchwerger, Texas A&M University, USA
Ralf Reussner, Karlsruhe Institute of Technology, Germany
George Riley, Georgia Institute of Technology, USA
Alma Riska, College of William and Mary, USA
Jerry Rolia, Carleton University, Canada
Peter Sweeney, IBM T.J. Watson Research Center, USA
Mirco Tribastone, LMU Munich, Germany
Catia Trubiani, Università di L'Aquila, Italy
Carey Williamson, University of Calgary, Canada
Murray Woodside, Carleton University, Canada
Peng Wu, IBM T.J. Watson Research Center, USA
Xiaoyun Zhu, VMware, USA