
University Bordeaux 1
Doctoral school of Mathematics and Computer Science

Thesis

Test and Validation of Web Services

presented by

Tiên Dung Cao

to receive the Doctoral Diploma of Computer Science

December, 2010

Committee in charge:

Charles Consel, Professor - Institut Polytechnique de Bordeaux (President)
Ana Cavalli, Professor - Telecom SudParis (Reviewer)
Manuel Núñez, Professor - Universidad Complutense de Madrid (Reviewer)
Richard Castanet, Professor - Institut Polytechnique de Bordeaux (Examiner)
Patrick Félix, Assistant Professor - University Bordeaux 1 (Examiner)
Fatiha Zaidi, Assistant Professor - University Paris-Sud XI (Examiner)

Supervisors: Prof. Richard Castanet - Assistant Prof. Patrick Félix

Thesis no: . . . . . . . . .


Acknowledgments

First and foremost, I would like to express my gratitude to my advisors: Mr. Richard Castanet, professor at the Institut Polytechnique de Bordeaux, and Mr. Patrick Félix, assistant professor at University Bordeaux 1, for their patience, support, and encouragement throughout my graduate studies. This dissertation would not have been possible without their invaluable advice and guidance.

I also want to thank Mrs. Ana Cavalli, professor at Telecom SudParis, and Mr. Manuel Núñez, professor at the Universidad Complutense de Madrid (Spain), for accepting to review my thesis, and Mr. Charles Consel, professor at the Institut Polytechnique de Bordeaux, and Mrs. Fatiha Zaidi, assistant professor at University Paris-Sud XI, for accepting to serve on my committee.

I have had the great pleasure of working with my partners and friends in the WebMov project. I would like to thank Nguyen Thi Kim Dung (PUF-HCM), Julien Borderie, Zouhair El Hilali, Guillaume Laborde, and Guillaume Lameyre (University Bordeaux 1), who helped me during the development of the prototype tools. I also thank my friends at LaBRI and my Vietnamese friends in Bordeaux for their help during my time in Bordeaux.

I wish to extend my deepest gratitude to my parents for their years of hard work and dedication. Lastly, but most importantly, no words can express my gratitude to my wife for her unrelenting support, understanding, and love. The last words are reserved for my son.


Abstract

In this thesis, we propose testing approaches for web service composition. We focus on unit and integration testing of an orchestration of web services, as well as on runtime verification. We define a unit testing framework for an orchestration that comprises a test architecture, a conformance relation, and two testing approaches based on the Timed Extended Finite State Machine (TEFSM) model: an offline approach, in which the test activities (timed test case generation, test execution, and verdict assignment) are applied sequentially, and an online approach, in which these activities are applied in parallel. For integration testing of an orchestration, we combine two approaches: active and passive. First, the active approach is used to start a new session of the orchestration by sending a SOAP request. Then, all messages exchanged among the services are collected and analyzed by a passive approach.

Regarding runtime verification, we are interested in the correctness of an execution trace with respect to a set of defined constraints, called rules. We propose an extension of the Nomad language that defines constraints on each atomic action (fixed conditions) and a set of data correlations between the actions in order to express rules for web services. This language allows us to define rules over both future and past time, and to use the operators NOT, AND, and OR to combine several conditions into the context of a rule. We then propose an algorithm that checks the correctness of a message sequence in parallel with the trace collection engine; specifically, this algorithm verifies the messages one by one without storing them.

Keywords: Web service orchestration, Timed Extended Finite State Machine, Online/Offline testing, Active/Passive testing, Runtime verification, Test case generation.


Résumé

In this thesis we propose testing approaches for the composition of web services. We are interested in unit and integration testing of an orchestration of web services; the online runtime verification aspect is also considered. We define a unit testing framework for an orchestration of web services that comprises a test architecture, a conformance relation, and two testing approaches based on the timed extended finite state machine model: the offline approach, in which the test activities (timed test case generation, test execution, and verdict assignment) are applied sequentially, and the online approach, in which these activities are applied in parallel. For integration testing of an orchestration, we combine two approaches: active and passive. First, the active approach is used to start a new session of the orchestration by sending a SOAP request message. Then, all input and output messages of the orchestration are collected and analyzed by the passive approach.

For the online runtime verification aspect, we are interested in checking whether or not a trace respects a set of constraints, called rules. We propose to extend the Nomad language by defining constraints on each atomic action and a set of data correlations between the actions in order to define rules for web services. This language allows us to define rules over future and past time, and to use the operators NOT, AND, and OR to combine several conditions in the context of a rule. We then propose an algorithm to check the correctness of a message sequence in parallel with the trace collection engine.

Keywords: Web service orchestration, timed extended finite state machine, online testing, active/passive testing, runtime verification, timed test case generation.


Contents

List of Figures
List of Tables
List of Algorithms

1 Introduction
  1.1 Context
  1.2 Web services
    1.2.1 Composite of web services
    1.2.2 Example of web services
  1.3 Motivations
    1.3.1 Web services development
    1.3.2 The necessity of test phases
  1.4 Contributions
  1.5 Outline
  1.6 Publications of thesis

2 Preliminaries
  2.1 Web service standards
    2.1.1 WSDL
    2.1.2 SOAP
    2.1.3 UDDI
  2.2 BPEL
    2.2.1 Introduction
    2.2.2 Based activities
    2.2.3 Structural Activities
    2.2.4 Scope
    2.2.5 Event handlers
    2.2.6 Fault handlers
  2.3 Software testing
    2.3.1 Conformance testing
    2.3.2 Test generation
    2.3.3 Passive testing
    2.3.4 Test tools
  2.4 Modelling orchestration of web service
    2.4.1 Automata
    2.4.2 Petri Nets
    2.4.3 Other formals
    2.4.4 Discussions
  2.5 Web services testing
    2.5.1 Testing from WSDL-based
    2.5.2 Testing from BPEL specification
    2.5.3 Passive testing
    2.5.4 Other works
  2.6 Conclusion

3 Testing Approaches for Web Services
  3.1 Introduction
  3.2 Formal model
    3.2.1 Timed Extended Finite State Machine
  3.3 Unit testing
    3.3.1 Test architectures
    3.3.2 Conformance relation
    3.3.3 Offline approach
    3.3.4 Online approach
  3.4 Integrated testing
    3.4.1 Methodology
    3.4.2 Checking algorithm
  3.5 Conclusion

4 Runtime Verification
  4.1 Introduction
  4.2 Methodology
    4.2.1 Test architecture
    4.2.2 The rules
    4.2.3 Checking algorithm
  4.3 Conclusion

5 Implementation
  5.1 A case study: Product Retriever
  5.2 WSOTF tool
    5.2.1 Introduction
    5.2.2 Experimentations and results
  5.3 RV4WS tool
    5.3.1 Introduction
    5.3.2 Experimentations and results
  5.4 Conclusion

6 Conclusions and perspectives
  6.1 Synthesis and results
  6.2 Perspectives

Bibliography

A The tutorial of the tools
  A.1 WSOTF
    A.1.1 TEFSM designer
  A.2 RV4WS
    A.2.1 Graphical interface
    A.2.2 Proxy
  A.3 TGSE
  A.4 BPELUnit

B Specification of xLoan web service
  B.1 WSDL file
  B.2 BPEL code

Index


List of Figures

1.1 Web Service Standards Stack
1.2 General architecture of web services
1.3 xLoan - use cases
1.4 xLoan - BPMN specification
1.5 WS development process and testing phases

2.1 WSDL document structure
2.2 SOAP message structure
2.3 A BPEL structure of xLoan example
2.4 Classification of test type
2.5 Examples of test cases

3.1 An example of TEFSM
3.2 An abstract model of SUT and its test architecture
3.3 An example of xtioco relation of D-TEFSMs
3.4 Offline testing approach for web services
3.5 Test purpose example (t is a clock)
3.6 Examples of abstract timed test cases
3.7 Out reach time intervals
3.8 Synchronous product and abstract test case selection
3.9 A CSUT model
3.10 Data generation example
3.11 An example of test execution scenario
3.12 Online testing approach illustration
3.13 An overview of integrated testing approach

4.1 Trace collection architecture for web services

5.1 ProductRetriever - BPMN specification
5.2 Architecture of the WSOTF engine
5.3 Input format of WSOTF
5.4 TEFSM of ProductRetriever
5.5 Architecture of the RV4WS tool
5.6 ParseData Interface of RV4WS
5.7 Rule format example
5.8 Checking analysis of RV4WS tool
5.9 Testbed architecture
5.10 Checking analysis of Product Retriever
5.11 Trace collection of Product Retriever

6.1 A choreography example and the part under test

A.1 The main GUI of TEFSM Designer
A.2 The visualization of the test results by the tree
A.3 The graphic visualization of the test results
A.4 The main GUI of RV4WS
A.5 Dialog to define a rule
A.6 Dialog to define the properties (data correlation)
A.7 Illustration of BPELUnit's test activities
A.8 The main GUI of BPELUnit
A.9 Edition of Send/Receive Synchronous of BPELUnit
A.10 Test case execution of BPELUnit


List of Tables

4.1 An example of runtime verification
5.1 Test results of ProductRetriever by WSOTF tool


List of Algorithms

1 Make the coverage tree from TEFSM
2 Test execution
3 Online testing algorithm
4 Get outgoing transitions from the current state
5 Checking algorithm from TEFSM
6 Runtime verification algorithm
7 verify_future(rule, msg, t, result)
8 verify_past(rule, msg, t, result)


Chapter 1

Introduction

Contents
1.1 Context
1.2 Web services
  1.2.1 Composite of web services
  1.2.2 Example of web services
1.3 Motivations
  1.3.1 Web services development
  1.3.2 The necessity of test phases
1.4 Contributions
1.5 Outline
1.6 Publications of thesis

1.1 Context

This thesis was carried out within the WebMov¹ project, supported by the French National Research Agency under its Software Technology program. The project comprises two industrial partners (Softeam² and Montimage³) and four academic partners (LaBRI, LRI-Orsay⁴, GET-INT⁵, and Unicamp⁶, Brazil). The main objective of WebMov is to contribute to the design, composition, and validation of web services through a high-level, abstract view and a Service Oriented Architecture based on a logical architecture vision (i.e., a UML profile), from which a BPEL orchestration of the service composition is generated. This orchestration specification is then modeled with formal models based on a variant of the Timed Extended Finite State Machine [94]. From this model, several testing approaches were proposed for web service composition. Concerning the test and validation research part of this project at LaBRI, our work was carried out in the Langage Système et Réseaux (in English: Language, System

¹ Web Services Modeling and Validation, http://webmov.lri.fr
² http://www.softeam.fr/
³ http://www.montimage.com/
⁴ http://www.lri.fr/
⁵ http://www.it-sudparis.eu/
⁶ http://www.unicamp.br/


and Network) team. This team has extensive experience in the field of testing, with some twenty theses defended in Bordeaux.

1.2 Web services

Web services [5] are designed to overcome the challenge of automating business process interactions. Following the definition of the W3C⁷, and specifically of the group involved in the Web Services Activity, a web service is: “a software application identified by a URI (i.e., Uniform Resource Identifier), whose interfaces and bindings are capable of being defined, described, and discovered as XML (i.e., Extensible Markup Language) artifacts. A Web service supports direct interactions with other software agents using XML-based messages exchanged via Internet-based protocols”.

Web services use the XML format to wrap the data they transmit. This format allows services implemented on different platforms (for example, Microsoft .NET or Sun J2EE, running on Windows or Linux) to communicate using a common format. XML Schema provides the type system for XML documents as a way to define data structures, and SOAP [146] (Simple Object Access Protocol) is a framework that further standardizes the structured type declarations for XML messages. An Internet protocol such as HTTP (Hypertext Transfer Protocol) is usually used to exchange these XML-based messages between services. Web services themselves are described using public standards. Each web service has to publish its invocation interface (e.g., network address, ports, functions provided, and the expected XML message format used to invoke the service) using the WSDL [143] (Web Services Description Language) standard. To describe the behavior, that is, the control flow, data manipulation semantics, and service qualities, we can use languages such as BPEL [132], WS-CDL [144], or OWL-S [145]. The stack of standardized protocols used by web services is shown in Figure 1.1.
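As an illustration of the XML wrapping described above, the sketch below builds a minimal SOAP 1.1 envelope with Python's standard library. The operation name, parameter names, and service namespace are hypothetical placeholders, not taken from the thesis.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.org/loan"  # hypothetical service namespace

def build_soap_request(operation, params):
    """Wrap an operation call and its parameters in a SOAP 1.1 envelope."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    # The operation element and its children live in the service's namespace.
    op = ET.SubElement(body, f"{{{SVC_NS}}}{operation}")
    for name, value in params.items():
        ET.SubElement(op, f"{{{SVC_NS}}}{name}").text = str(value)
    return ET.tostring(envelope, encoding="unicode")

msg = build_soap_request("loanRequest", {"name": "Alice", "amount": 12000})
print(msg)
```

A real client would then POST this envelope over HTTP to the port declared in the service's WSDL; frameworks generate this plumbing from the WSDL rather than writing it by hand.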

Figure 1.1: Web Service Standards Stack

Once a service is provided, its specification and functionality description, in the WSDL file, are registered in a UDDI registry, which allows the web service to be discovered by other services. If a client or service consumer wants to use this service, it will find the WSDL file in the UDDI [118] registry and can analyse it to obtain information about the service. After

⁷ World Wide Web Consortium, http://www.w3c.org


that, it will build a SOAP request message and send it to a port indicated by the WSDL file. A SOAP response message will be returned, either on the same port or not, depending on whether the service provider supports synchronous or asynchronous interaction. Synchronous services are characterized by the client invoking a service and then waiting, on the same port, for a response to the request. With asynchronous services, in contrast, the client invokes the service but does not (or cannot) wait for the response. Often the client does not want to wait, because the service may take a significant amount of time to process the request; the client can continue with other processing rather than wait for a response. Later, when it does receive the response, it resumes whatever processing initiated the service request. We present here two patterns that are usually used to handle asynchronous services:

• One-way and notification operations (or callback): the request and the response are two messages defined within separate operations. The request is modeled as an inbound one-way operation and the response as an outbound notification operation. Each message is sent as a separate transport-level transmission;

• Request/reply operations: in this pattern, the request and the response are two messages defined within a single request/reply operation and sent as two separate and unrelated transport-level transmissions.

The general architecture of web services is shown in Figure 1.2.
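The contrast between the two interaction styles can be sketched as follows. The partner service here is a local stand-in (a real client would exchange SOAP messages over HTTP), and all function names are illustrative only: the point is that the synchronous client blocks, while the asynchronous client registers a callback and keeps working.

```python
import threading
import queue

def partner_service(request):
    # Stand-in for a partner web service; a real one would take time to answer.
    return "response to " + request

def invoke_sync(request):
    """Synchronous pattern: block until the response arrives."""
    return partner_service(request)

def invoke_async(request, callback):
    """Callback pattern: return immediately; deliver the response later."""
    worker = threading.Thread(target=lambda: callback(partner_service(request)))
    worker.start()
    return worker

# Synchronous: the client waits for the answer on the same channel.
print(invoke_sync("req-1"))

# Asynchronous: the client continues; the response arrives via the callback.
responses = queue.Queue()
invoke_async("req-2", responses.put)
print("client continues with other processing...")
print(responses.get())  # resumes once the response is delivered
```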

Figure 1.2: General architecture of web services

1.2.1 Composite of web services

Web services are the basic elements of an SOA application, and a web service can itself be an SOA application. We can reuse and integrate other web services to obtain a new service. The latter executes by interacting with other services, called partner services, to accomplish its workflow. We call this service a composite of web services when we know the interactions between the services of the composite. We need to distinguish two forms of composition of web services:


• Orchestration: the way that allows us to describe a new web service at the message level, including the order in which messages are executed and the logical behavior. An orchestration exposes the internal logic of a single component by specifying the control flow structure and the data flow dependencies of that particular component. In an orchestration, there is only one main process (named the orchestrator), which controls its partner services using a central architecture. BPEL Executable Process [132] is a standard language for defining an orchestration specification. We can also use this language as the execution code of an orchestration by running it on a BPEL engine such as Active-BPEL [4], the Sun BPEL engine, Oracle, etc.

• Choreography: describes a collaboration of services in a composite to accomplish one or several objectives. A choreography aims at exposing the whole flow of interaction among all the parties involved in the composite. In this context, there is no main process as in an orchestration; all participants are aware that they are taking part in the composition and, thus, of the way they should interact with each other. WS-CDL [144] and BPEL Abstract Process [132] are languages for describing the choreography of a composite of web services.

1.2.2 Example of web services

We present here a running example of web services: xLoan. It is an extension of the loan approval example presented in the BPEL standard [132]. This service is built using two partner web services: an assessment service and a bank service. It receives a request message from the client containing: identification, name, monthly income, amount, monthly payment, and the number of months for repayment. Using this information, the loan service executes a simple process resulting in either a "loan accept" or a "loan reject". If the amount requested is greater than $10000, a risk assessment service is used to obtain a quick evaluation of the risk associated with the customer. If this risk is "high", a "loan reject" is returned to the client. If the amount requested is less than or equal to $10000, or the risk assessment is low, the bank service is used to approve the customer's request. Afterwards, the loan assessment is sent to the client. Finally, if the loan assessment is "accept", the process waits one minute to receive a confirm message from the client and forwards this message to the bank service; otherwise, a cancel message is sent to the bank service as a notification. Figures 1.3 and 1.4 show the use cases and the flow specification in the BPMN language. The WSDL specification is described in Appendix B.1.
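Assuming the two partner services simply return fixed answers, the decision flow just described can be sketched as follows; the SOAP message exchange, the one-minute confirmation timeout, and the confirm/cancel forwarding are deliberately elided.

```python
def assess_risk(customer_id):
    # Stand-in for the risk assessment partner; returns "low" or "high".
    return "low"

def bank_decide(request):
    # Stand-in for the bank partner; returns "accept" or "reject".
    return "accept"

def xloan(request):
    """Simplified xLoan decision flow (messaging and timeouts elided)."""
    if request["amount"] > 10000 and assess_risk(request["id"]) == "high":
        return "reject"  # high risk: reject without consulting the bank
    # Amount <= $10000, or risk is low: the bank approves or rejects.
    return bank_decide(request)

request = {"id": "c42", "name": "Alice", "income": 3000,
           "amount": 15000, "payment": 500, "months": 30}
print(xloan(request))
```

With the stand-ins above, a $15000 request triggers the risk assessment (amount > $10000), finds low risk, and is forwarded to the bank; the "accept" branch would then wait for the client's confirm message in the full service.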

Figure 1.3: xLoan - use cases


Figure 1.4: xLoan - BPMN specification


1.3 Motivations

The Service Oriented Architecture (SOA) is an architectural paradigm that has gained great popularity in recent years. The main idea of SOA is to reuse and integrate services, which are loosely coupled and mostly distributed. The interface of a service is described in an abstract interface language and can be invoked without knowledge of the underlying implementation. Services can be dynamically discovered and used. Nowadays, SOA is the leading approach to building, integrating, and maintaining complex enterprise software systems. Web services are the base applications used to build these SOA applications, and a web service can itself be built as an SOA application by reusing and integrating other web services. The Quality of these Services (QoS) affects the quality of the application, and testing is a method to assess this QoS. Section 1.3.1 introduces the SOA-based web service development process, and Section 1.3.2 the necessity of test phases when developing a web service.

1.3.1 Web services development

The web service (WS) development process based on SOA, and the testing techniques that apply to each phase of the development, are shown in figure 1.5. Firstly, the WS specification is written in a WS specification language such as WSDL. A decision needs to be made in the first step on whether to decompose the WS into multiple services8. If no decomposition is necessary, the WS will consist of only one service; otherwise, it will consist of multiple services. For each service, a WS search is performed (step 2) and any WS found is tested, provided it can be tested. However, some running services do not allow us to test them directly (active testing) because a test may affect their database, so some of the located WSs can only be tested after being composed into a composite, using a passive testing technique. At this phase, unit testing is applied to a single WS or to a composite of WSs, depending on the information available from its specification. If no existing WS satisfies our criteria, a new service is developed (step 3). Of course, this service is tested before being composed into the new composite. After all partner services are ready, we compose them to obtain a new service (step 4) and publish it (step 5). As said earlier, if these partners allow us to test them, the integrated testing technique is applied before publishing. Finally, passive testing is used on the running service.

1.3.2 The necessity of the test phases

In figure 1.5, we propose four phases to test a web service: unit testing is applied to each partner service and to the new composite; integrated testing is an optional phase used to test the interaction between the services in the real environment; and finally, passive testing is used for monitoring.

Unit testing: In a composite of web services, if the quality of one service is poor, it can affect the other partners and the composite. For instance, suppose a composite executes some partners in parallel and an error occurs in one partner while the others have already been updated; the partner's mistake then propagates if rollback is not supported. On the other hand, if a composite does not conform to its specification, all its partner services may

8In the xLoan example (section 1.2.2), this web service has been decomposed into two services: assessment and bank


Figure 1.5: WS development process and testing phases


be affected. Thus, unit testing is necessary to find the bugs in each single partner service and in the isolated composite, without interaction with its real partners, by simulating them. Unit testing has some specific characteristics. In the context of this thesis, we focus on the conformance testing of a composite of web services. Moreover, time constraints must be considered in the test approach because web services are real-time applications: time may be the deciding condition for a composite to continue its process or not.

Integrated testing: web services are real-time applications and their quality depends on the conditions of the real environment. All partners and their composite behavior can be correct, but the quality of the end-to-end service is not guaranteed if the network speed is insufficient, the capacity of the server is limited, etc. This is why integrated testing is necessary: to check the interaction between the services in the real environment before publishing the service to the end users.

Passive testing: this approach is applied to running services to verify some properties. Finding all faults of a service through testing is impossible, so supervising it after publication is necessary. Moreover, in many cases we cannot apply active testing methods to a running system. For example, if we use an active method to test the create_new_account function of a bank service, this will corrupt the database of the service. A composite of web services is a system that is integrated at runtime, and its result depends on its partners and on the real environment. Monitoring techniques are sometimes used to observe some properties of the services that have a high likelihood of faults, but they do not give conclusions about these properties. Passive testing is a technique that can monitor some properties of the services and give a verdict (true/false) by collecting the execution traces and analyzing them. We are therefore interested in using this approach to continue testing a web service after its publication to the end users, because it does not affect the running services. A method that can check the running service in real time is very important because it allows us to find faults immediately, whenever they happen. A solution can then be proposed to fix these faults and avoid unnecessary damage.

1.4 Contributions

This manuscript presents our contributions to testing a web service orchestration, focusing on the following aspects of testing: active, passive, unit and integrated. Most of our work builds on the TEFSM (Timed Extended Finite State Machine) model of Lallali et al. [94].

Unit testing framework: Unit testing is used to find bugs in a single web service. For a web service orchestration, unit testing means that the main process is tested without interaction with its real partners. We propose in chapter §3 a framework for unit testing whether an orchestration implementation conforms to its specification. We focus on a gray-box testing setting where the interactions between the composite of web services and its partners are observable. This framework is composed of a test architecture, a conformance relation (i.e., xtioco) and a testing approach that itself consists of 5 steps: (i) modeling the composite of web services by a TEFSM, (ii) generating all possible test purposes, (iii) generating the abstract timed test case, (iv) deriving the concrete timed test case, (v) executing the timed


test case and producing the verdict.

Automatic timed test case generation: Based on the xtioco conformance relation, we propose an automatic timed test case generation method (see section §3.3.3 of chapter §3). The generation is guided by a test purpose, which describes the properties to be verified on the implementation of the service under test. Using the TEFSM model as a formal specification, our method computes the synchronous product of the specification and a test purpose. While calculating this product, the feasibility of paths is checked in order to prune unsatisfiable paths. Finally, the timed test case is generated by selecting a trace leading to an Accept state of the product.

Online testing approach: Online testing is an approach that combines test generation and execution: only a single test primitive (input event or time delay) is generated from the model at a time, and it is immediately executed on the implementation under test (IUT). The output produced by the IUT, as well as its occurrence time, is then checked against the specification [113]. At that point, a new test primitive is produced, based on the values of previous events or on random selection if there are several acceptable options, and so forth until a final state is reached. With this approach, the complete test scenario (test case suite and data) is built during test execution, and no test purposes are used. We applied this approach to unit testing of web service orchestrations (see section §3.3.4 of chapter §3).

Integrated testing of web services: Integration testing is aimed at exercising the interaction among components, not just single units. We propose in chapter §3 (section §3.4) an integrated testing approach to test the interaction of an orchestration with its real partners. This approach includes two steps: firstly, an active approach is used to start a new session of the composite by sending a SOAP request to the main process. The messages exchanged among the components, including their occurrence times, are then collected and analyzed to produce the verdict. We also propose an algorithm that verifies whether an observable trace is correct with respect to a TEFSM specification; that is, a passive approach is applied at this step.

Automated runtime verification: We propose in chapter §4 a new approach to verify a running system against a set of constraints. This approach collects the observable traces of the system by installing a probe and analyzes them to produce a verdict. We extended the Nomad language to define a constraint, called a rule, and proposed an algorithm to check whether an observable trace satisfies a set of rules or not. A rule is defined based on the order of messages and the time intervals between them, covering both future (a then b) and past (b before a) modalities. Notably, our algorithm can run in parallel with the trace collection engine, so faults may be found immediately and we can stop the system to avoid any damage.

Implementation and case study: We have developed two tools that support the complete testing of a web service composition. The first tool, named WSOTF (Web Services Online Testing Framework), supports complete active testing, comprising timed test case generation, test execution and verdict assignment. This tool also allows us to simulate all partner services of an orchestration when we apply the unit testing technique. The second tool, named RV4WS (Runtime Verification For Web Services), focuses on the problem of passive testing, that is, the verification of some properties on a running service. These tools are introduced in chapter §5.


1.5 Outline

The structure of this thesis is organized as follows: chapter 2 gives an overview of the state of the art. Chapters 3 and 4 form the main body of the thesis and present the results of our research. Chapter 5 presents implementations of these theories and experience with a case study. Conclusions and perspectives are discussed in chapter 6.

Chapter 2 presents the details of the web service standards and the BPEL language, which is used to compose web services in order to obtain a new service, called a composite of web services. This chapter also discusses conformance testing methods and the existing tools for software systems. Some formal models and testing approaches for web services are introduced in this chapter as the starting point of the thesis.

Chapter 3 introduces our testing approaches for web services. Firstly, we present the formal model TEFSM (Timed Extended Finite State Machine) that is used to model an orchestration of web services. In this chapter, two approaches (offline and online) for unit testing and an approach for integrated testing of a web service are introduced. For unit testing, we focus on conformance testing, so we define a conformance relation named xtioco (Extended Timed Input/Output Conformance), a test architecture, an algorithm that can generate all possible abstract timed test cases, and an online testing algorithm that generates and executes the timed test cases in parallel. For integrated testing, we propose a method composed of two approaches: active testing and passive testing. Firstly, active testing is used by generating and sending a SOAP request to the SUT to start a new session. Then passive testing is applied by collecting the messages exchanged among the services and analyzing them.

A new passive testing (runtime verification) approach is introduced in chapter 4. We discuss how to extend the Nomad language, by adding constraints on each message and data correlation to group the messages, in order to define the rules for passive testing of web services. An algorithm that verifies the trace message by message, without storing the messages, is proposed in this chapter. A mechanism for trace collection is also discussed.

The two implementation tools of this thesis (WSOTF and RV4WS) are presented in chapter 5. WSOTF is a tool implemented on the basis of the online testing approach, for unit testing a web service or an orchestration of web services. RV4WS focuses on the verification of running services. For each tool, we report on a real-life case study of the WebMov project, named Product Retriever.

Finally, in chapter 6, the conclusions and perspectives of our thesis are discussed.

1.6 Publications of thesis

This thesis is the result of 2 years and 6 months of research carried out at the LaBRI laboratory within the WebMov project, under the supervision of Professor Castanet and Assistant Professor Félix. In the context of the WebMov project, several international publications and technical reports have been produced. We list below five international publications in peer-reviewed conference proceedings. I am the main author of papers [1,3,4,5]


and paper 2 is a collaborative work with the partners of the WebMov project.

1. Tien-Dung Cao, Trung-Tien Phan-Quang, Patrick Felix and Richard Castanet. Automated Runtime Verification for Web Services. In the IEEE International Conference on Web Services, pages 76-82, Miami, Florida, USA, July 5-10, 2010. (Research Track, acceptance rate 17.6%)

2. Ana Cavalli, Tien-Dung Cao, Wissam Mallouli, Eliane Martins, Andrey Sadovykh, Sebastien Salva and Fatiha Zaidi. WebMov: A dedicated framework for the modeling and testing of Web services composition. In the IEEE International Conference on Web Services, pages 377-384, Miami, Florida, USA, July 5-10, 2010. (Applications and Industry Track, acceptance rate 17.5%)

3. Tien-Dung Cao, Patrick Felix and Richard Castanet. WSOTF: An automatic testing tool for web services composition. In the Fifth International Conference on Internet and Web Applications and Services, pages 7-12, Barcelona, Spain, May 9-15, 2010. IEEE Computer Society. (acceptance rate 31%)

4. Tien-Dung Cao, Patrick Felix, Richard Castanet and Ismail Berrada. Online Testing Framework for Web Services. In the Third IEEE International Conference on Software Testing, Verification and Validation, pages 363-372, Paris, France, April 6-9, 2010. (acceptance rate 26.5%)

5. Tien-Dung Cao, Patrick Felix, Richard Castanet and Ismail Berrada. Testing Web Services Composition Using the TGSE Tool. In SERVICES '09: Proceedings of the 2009 IEEE Congress on Services - I, pages 187-194, Los Angeles, CA, USA, July 6-10, 2009.


Chapter 2

Preliminaries

Contents

2.1 Web service standards
  2.1.1 WSDL
  2.1.2 SOAP
  2.1.3 UDDI
2.2 BPEL
  2.2.1 Introduction
  2.2.2 Basic activities
  2.2.3 Structural Activities
  2.2.4 Scope
  2.2.5 Event handlers
  2.2.6 Fault handlers
2.3 Software testing
  2.3.1 Conformance testing
  2.3.2 Test generation
  2.3.3 Passive testing
  2.3.4 Test tools
2.4 Modelling orchestration of web services
  2.4.1 Automata
  2.4.2 Petri Nets
  2.4.3 Other formalisms
  2.4.4 Discussions
2.5 Web services testing
  2.5.1 Testing from WSDL
  2.5.2 Testing from BPEL specification
  2.5.3 Passive testing
  2.5.4 Other works
2.6 Conclusion


We present in this chapter an overview of the web service standards (section §2.1) and of WS-BPEL, or BPEL for short (Web Services Business Process Execution Language), an XML-based language designed to compose web services in order to implement business processes (section §2.2). The existing approaches to software testing, such as black-box testing, white-box testing, unit testing, integrated testing and conformance testing, are discussed in section §2.3. Some existing models of web service orchestration are introduced in section §2.4. Section §2.5 presents the test approaches based on the different specifications of web services. Finally, section §2.6 concludes this chapter.

2.1 Web service standards

In section §1.2 of chapter §1, we gave an overview of web services and introduced some web service standards, all based on XML. This section discusses these standards.

2.1.1 WSDL

WSDL [143] (Web Service Description Language) is an XML-based language for describing web services and how to access them. A WSDL document defines a service as a set of network endpoints (or ports). In WSDL, the abstract definition of endpoints and messages is separated from their concrete network deployment or data format bindings. This allows the reuse of abstract definitions: messages, which are abstract descriptions of the data being exchanged, and port types, which are abstract collections of operations. The concrete protocol and data format specifications for a particular port type constitute a reusable binding. A port is defined by associating a network address with a reusable binding, and a collection of ports defines a service.

Figure 2.1 presents the structure of a WSDL document, where:

• Types: data type definitions using XSD schema;

• Message: an abstract, typed definition of the data being communicated;

• Operation: an abstract description of a supported action;

• PortType: an abstract set of operations supported by one or more endpoints;

• Binding: a concrete protocol and data format specification for a particular porttype;

• Port: a single endpoint defined as a combination of a binding and a network address;

• Service: a collection of related endpoints.
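To illustrate, a minimal WSDL 1.1 skeleton tying these seven elements together might look as follows. The service, operation and namespace names are invented for the example, not taken from the thesis's appendices.

```xml
<definitions name="Loan"
    targetNamespace="http://example.org/loan"
    xmlns="http://schemas.xmlsoap.org/wsdl/"
    xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
    xmlns:tns="http://example.org/loan"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <types>  <!-- data types, defined with XML Schema -->
    <xsd:schema targetNamespace="http://example.org/loan">
      <xsd:element name="amount" type="xsd:int"/>
      <xsd:element name="answer" type="xsd:string"/>
    </xsd:schema>
  </types>
  <message name="requestMsg"><part name="body" element="tns:amount"/></message>
  <message name="responseMsg"><part name="body" element="tns:answer"/></message>
  <portType name="LoanPortType">  <!-- abstract set of operations -->
    <operation name="request">
      <input message="tns:requestMsg"/>
      <output message="tns:responseMsg"/>
    </operation>
  </portType>
  <binding name="LoanBinding" type="tns:LoanPortType">  <!-- concrete protocol -->
    <soap:binding style="document"
        transport="http://schemas.xmlsoap.org/soap/http"/>
    <operation name="request">
      <soap:operation soapAction="http://example.org/loan/request"/>
      <input><soap:body use="literal"/></input>
      <output><soap:body use="literal"/></output>
    </operation>
  </binding>
  <service name="LoanService">  <!-- collection of ports -->
    <port name="LoanPort" binding="tns:LoanBinding">
      <soap:address location="http://example.org/loan"/>
    </port>
  </service>
</definitions>
```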

2.1.2 SOAP

SOAP [146] is a messaging protocol that defines XML-based information which can be used to exchange structured and typed information between applications in a decentralized, distributed environment, over a network protocol such as HTTP, SMTP, etc. Figure 2.2 shows the structure of a SOAP message, which contains two SOAP-specific sub-elements within the overall Envelope, namely a Header and a Body:


Figure 2.1: WSDL document structure

• Header element: an optional part that is used to declare information for authentication, session management, etc.

• Body element: the mandatory element in which the main end-to-end information conveyed in the SOAP message must be carried.

Within the body element, two types of message exchange are supported: Conversational Message Exchanges and Remote Procedure Calls (RPC). A conversational message exchange can be modeled simply as exchanged XML-based content whose semantics lie at the level of the sending and receiving applications. In an RPC-style message, on the other hand, the data and its semantics, for example the address of the target SOAP node, the procedure or method name, and the identities and values of any arguments to be passed to the procedure, are modeled in the message itself.
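For instance, a document-style SOAP 1.1 request for a loan service could be carried as follows; the payload element and its field values are illustrative, not taken from the xLoan WSDL.

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <!-- optional: authentication or session information -->
  </soap:Header>
  <soap:Body>
    <!-- the main end-to-end payload -->
    <request xmlns="http://example.org/loan">
      <identification>C1024</identification>
      <name>John Doe</name>
      <amount>15000</amount>
    </request>
  </soap:Body>
</soap:Envelope>
```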

2.1.3 UDDI

Universal Description, Discovery and Integration (UDDI) is a platform-independent, XML-based registry for businesses worldwide to list themselves on the Internet. In the case of web services, it is composed of a set of web service description files written in the WSDL language. The communication between a client and a web service first passes through a step of discovery and localization of the service in this directory.


Figure 2.2: SOAP message structure

2.2 BPEL

2.2.1 Introduction

Web Services Business Process Execution Language (WS-BPEL, or BPEL) is a standard language that allows us to define a composite of web services in two ways: abstract or executable. A BPEL abstract process declares an interface of process behavior that is not intended to be executed and must be explicitly declared as 'abstract'. Executable processes, in contrast, are fully specified and can thus be executed by a BPEL engine. Here, we give an overview of the BPEL executable process (from here on, called a BPEL process), which is usually used to define the specification (or the execution) of an orchestration of web services. BPEL defines an XML-based syntax to describe complex business processes that can interact synchronously or asynchronously with their partners. A BPEL process is exposed as a composite of web services via a WSDL interface. The process always starts with the process element (i.e., the root of the BPEL document), which is composed of the children partnerLinks, variables and activities, and the optional children faultHandlers, eventHandlers and correlationSets; the handlers among these children run concurrently with the main activity. Like any programming language, BPEL provides basic activities as elementary commands to interact with partners or to perform internal actions, and the usual structural activities to control the message flow based on message values or timing constraints. A correlationSet property is used in BPEL as a session identifier of the partners, based upon the values of data variables. Figure 2.3 shows the BPEL structure of the xLoan example1. In this example, a correlation set named CS1 is declared to correlate the client identification between two client requests (the request operation and the confirm operation).
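Such a correlation could be declared as sketched below. The property name tns:clientId and its alias declarations (omitted here) are assumptions for illustration; the actual declarations are those of appendix B.2.

```xml
<correlationSets>
  <correlationSet name="CS1" properties="tns:clientId"/>
</correlationSets>

<!-- the first receive initiates the correlation set ... -->
<receive partnerLink="client" operation="request"
         variable="req" createInstance="yes">
  <correlations><correlation set="CS1" initiate="yes"/></correlations>
</receive>

<!-- ... and the confirm message is routed to the same process instance -->
<receive partnerLink="client" operation="confirm" variable="conf">
  <correlations><correlation set="CS1" initiate="no"/></correlations>
</receive>
```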

1The full BPEL source is shown in the appendix B.2


Figure 2.3: A BPEL structure of xLoan example


2.2.2 Basic activities

The basic activities are the elementary commands of the BPEL language. We can classify these activities into two types: communicating activities (receive, reply, invoke):

• A business process provides services to its partners through inbound message activities (<receive>, <onMessage> and <onEvent>) and corresponding <reply> activities. <onMessage> and <onEvent> are nested in the structural activities <pick> and <eventHandlers>, which will be introduced in the next sections.

• The <invoke> activity is used to call an operation on a service provider. This activity can enclose other activities, in inlined compensation and fault handlers. The invoked operation can be a request-response or a one-way operation, corresponding to the WSDL operation definitions.

and internal activities (assign, exit, empty, throw, wait):

• The <assign> activity can be used to copy data from one variable to another.

• The <exit> activity is used to immediately end the business process instance.

• The <empty> activity, which does nothing, is used to provide a synchronization point to wait for other activities.

• The <throw> activity is used when a business process needs to signal an internal faultexplicitly.

• The <wait> activity specifies a delay for a certain period of time or until a certaindeadline is reached.
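A hedged sketch combining these basic activities in a request-response exchange; the partner-link, operation and variable names are illustrative assumptions:

```xml
<sequence>
  <receive partnerLink="client" operation="request"
           variable="req" createInstance="yes"/>
  <assign>  <!-- copy the requested amount into the bank request -->
    <copy><from>$req.amount</from><to>$bankIn.amount</to></copy>
  </assign>
  <!-- request-response invocation of a partner operation -->
  <invoke partnerLink="bank" operation="approve"
          inputVariable="bankIn" outputVariable="bankOut"/>
  <reply partnerLink="client" operation="request" variable="bankOut"/>
</sequence>
```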

2.2.3 Structural Activities

The structural activities control the execution order of their sub-activities: sequentially, with <sequence>, <if>, <while> and <repeatUntil>; concurrently, with <flow>; or upon the arrival of events, with <pick>.

• <sequence> allows us to define a collection of activities that are executed sequentially.

• <if> defines an ordered list of conditional branches from which only one activity is chosen for execution.

• <while> provides for repeated execution of sub-activities as long as the boolean condi-tion evaluates to true at the beginning of each iteration.

• <repeatUntil> provides for repeated execution of sub-activities until the given booleancondition becomes true.

• <pick> waits for the occurrence of exactly one <onMessage> event from a set of <onMessage> events, then executes the activity associated with that event. The <onMessage> is similar to a <receive> activity, in that it waits for the receipt of an inbound message. After an event has been selected, the other events are no longer accepted. If no event is received before the duration given in <for> or the deadline given in <until>, an <onAlarm> event is raised as a timeout and the activity associated with this event is executed.


• <flow> provides concurrency and synchronization of a set of activities. The synchronization among activities is provided by means of links. Each link has a source activity and a target activity. Furthermore, a transition condition, which is a boolean expression, is associated with each link and is evaluated (true or false) when the source activity terminates. Each activity of a flow may have a join condition, which combines the incoming links of the activity with boolean operators. An activity is enabled and can start only when the values of all its incoming links are defined and its join condition evaluates to true. If the join condition evaluates to false, the activity is not executed.
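As an example of <pick>, the one-minute wait of the xLoan example (section 1.2.2) can be expressed as sketched below; the partner-link, operation and variable names are illustrative:

```xml
<pick>
  <onMessage partnerLink="client" operation="confirm" variable="conf">
    <!-- forward the client's confirmation to the bank -->
    <invoke partnerLink="bank" operation="confirm" inputVariable="conf"/>
  </onMessage>
  <onAlarm>
    <for>'PT1M'</for>  <!-- timeout after one minute -->
    <invoke partnerLink="bank" operation="cancel" inputVariable="cancelMsg"/>
  </onAlarm>
</pick>
```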

2.2.4 Scope

BPEL provides the <scope> activity, comparable to a sub-function in a programming language. <scope> activities can be nested hierarchically, while the root context is provided by the <process>. A <scope> may also contain variables, partner links, message exchanges, correlation sets, event handlers and fault handlers, and, in addition, a termination handler and a compensation handler, which cannot be attached to the <process>. Within a <scope>, errors may appear after we have invoked a partner and this partner has been updated by our transaction; a rollback mechanism must then be supported to restore the previous state of the partner before we stop the <scope> or continue other work. A <compensationHandler> allows us to solve this problem.
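A sketch of such a scope, whose compensation handler undoes the bank invocation if the enclosing process later fails; the operation and variable names are illustrative assumptions:

```xml
<scope name="bankApproval">
  <compensationHandler>
    <!-- rollback: undo the partner update made by this scope -->
    <invoke partnerLink="bank" operation="cancel" inputVariable="cancelMsg"/>
  </compensationHandler>
  <invoke partnerLink="bank" operation="approve"
          inputVariable="bankIn" outputVariable="bankOut"/>
</scope>
```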

2.2.5 Event handlers

Each scope, including the process scope, can have a set of event handlers. An event handler is similar to a <pick> activity with an <onEvent> waiting for the receipt of an inbound message, but event handlers run concurrently with the scope and are invoked when the corresponding event occurs. The child activity within an <onEvent> or an <onAlarm> must be a <scope> activity.

2.2.6 Fault handlers

While a <process> or a <scope> is running, faults may occur, and in many cases these faults need to be caught and processed. Explicit fault handlers, if used, attached to a scope provide a way to define a set of custom fault-handling activities, defined by <catch> and <catchAll> constructs. Each <catch> construct is defined to intercept a specific kind of fault, identified by a fault QName. A <catchAll> clause can be added to catch any fault not caught by a more specific <catch> construct.
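A sketch of the two constructs; the fault, operation and variable names are illustrative assumptions:

```xml
<faultHandlers>
  <catch faultName="tns:loanFault" faultVariable="f"
         faultMessageType="tns:faultMsg">
    <!-- report this specific fault back to the client -->
    <reply partnerLink="client" operation="request"
           variable="f" faultName="tns:loanFault"/>
  </catch>
  <catchAll>
    <sequence>
      <invoke partnerLink="bank" operation="cancel" inputVariable="cancelMsg"/>
      <rethrow/>  <!-- propagate any other fault to the enclosing scope -->
    </sequence>
  </catchAll>
</faultHandlers>
```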

2.3 Software testing

Testing is an important step in verifying and assessing the quality of applications, and there are several approaches to it. According to the characteristics of the application, the phase of development, the information available from the specification and the degree of control over the application, an appropriate type of test can be applied in each concrete case. Following Jan Tretmans [135], we can categorize test types along these four properties and use a schema with four axes to show them (Fig 2.4).


Figure 2.4: Classification of test type

The characteristics:

• Conformance: this kind is considered first because it is used to test the conformance of an implementation under test with respect to its specification.

• Robustness: is used to test the capability to deal with unexpected data.

• Performance: refers to the assessment of the performance of an application in different cases. It is usually used to determine the speed or effectiveness of an application.

• Security: a process to determine that an information system protects data and maintains functionality as intended. Some security concepts that need to be covered by security testing are:

– Authentication: the process of establishing the identity of the user.

– Authorization: the process of determining that a requester is allowed to receive a service or perform an operation.

– Availability: assuring that information and communication services will be ready for use when expected, i.e., that information is kept available to authorized persons when they need it.

– Integrity: a measure intended to allow the receiver to determine that the information being provided is correct.

– etc.


• Reliability: to assess correct functioning under different conditions, for example: timing constraints, network speed, etc.

The phases of the development:

• Unit testing: to verify the operation of only one component or one module in isolation from the rest of the system.

• Integration testing: to test the interactions between components or the integrated system. It means that the system is verified at the interface level of each component.

• System testing: to verify the global behavior of the system.

The accessibility:

• Black-box: the tester does not know the internal structure of the system. Only its specification is used to generate the test cases for functional testing.

• Gray-box: some information about the internal structure is available to the tester.

• White-box: the tester knows the internal structure of the system (i.e., the code) and verifies the structure by testing different paths in the code.

The controllability:

• Active: the tester interacts directly with the system under test, sending requests and receiving the results in order to analyze them.

• Passive: the tester assesses a system from input/output events or a log file, without interacting with the system under test.

2.3.1 Conformance testing

Conformance testing verifies whether the behavior of a component (the delays and the exchanged messages) in a single system implementation corresponds to its specification. This is a kind of active testing and, depending on the available information, black-box, gray-box or white-box techniques can be applied to conclude on conformance. A conformance testing method has three main activities: test case generation, test execution and verdict assignment.

To give a verdict (pass, fail or inconclusive), we need a concept that defines what conformance means. Tretmans [136] defined a conformance relation, called ioco, that is usually used for black-box testing of a system through its input/output events.

Definition 2.1. (ioco conformance relation): An implementation I conforms to a specification S under the ioco conformance relation if, for every possible evolution of S, the outputs that the implementation I may perform after a given input are a subset of those of the specification.
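As an illustration of the subset condition (not the formal definition from [136]), the idea of ioco can be sketched on finite, deterministic machines; the dictionary encoding and the helper names below are our own simplification:

```python
# Sketch of the ioco idea. A machine maps (state, input) to a pair
# (set of possible outputs, next state). I ioco S holds if, after every
# input sequence allowed by S, the outputs that I may produce form a
# subset of those allowed by S.

def outputs_after(machine, start, inputs):
    """Return the set of outputs the machine may produce after 'inputs'."""
    state, outs = start, set()
    for i in inputs:
        if (state, i) not in machine:
            return set()                 # input sequence not allowed
        outs, state = machine[(state, i)]
    return set(outs)

def ioco(impl, spec, start, input_sequences):
    """Check the subset condition for every given input sequence."""
    for seq in input_sequences:
        if not outputs_after(impl, start, seq) <= outputs_after(spec, start, seq):
            return False
    return True

# Specification: after input 'a', outputs 'x' or 'y' are allowed.
spec = {("s0", "a"): ({"x", "y"}, "s1")}
good = {("s0", "a"): ({"x"}, "s1")}        # subset of allowed outputs
bad  = {("s0", "a"): ({"x", "z"}, "s1")}   # 'z' is not allowed by the spec
```

Here `ioco(good, spec, "s0", [["a"]])` holds, while the implementation `bad` is rejected because it may produce the unspecified output `z`.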


This ioco conformance relation allows us to decide whether an implementation conforms to its specification using only the input/output events of the implementation. For a timed system, ioco is not enough to say that I conforms to S, because it does not define when these events happen. An extension of ioco, called tioco, was defined by M. Krichen and S. Tripakis [92], in which timing constraints are considered. It states that a system may perform a given action but also records the duration the system needs to do so. If an output of I is slower than the corresponding output of S, then I does not conform to S. That means the time delays are included in the set of observable outputs.

A test represents a sequence of one or several inputs applied to an implementation under test. Once an output is received within a duration, we check whether it belongs to the set of expected outputs. If it does not, a fail verdict is produced. If it does, either a pass verdict is emitted or the testing process continues by applying another input. Each such sequence is a test case. If time constraints are indicated in these test cases, we call them timed test cases. If we want to interrupt the testing process before a pass or fail verdict has been found, an inconclusive verdict can be produced. An inconclusive verdict is also produced in other cases when we apply a test case derived from a test purpose. In [117], a test case is defined as follows:

Definition 2.2. (Test case): A test case is a tuple T = (S, I, O, Tr, s0, SI, SO, SF, SP, C), where S is the set of states, I and O are disjoint sets of input and output actions, respectively, Tr ⊆ S × (I ∪ O) × S is the transition relation, s0 ∈ S is the initial state, and the sets SI, SO, SF, SP ⊆ S form a partition of S. The transition relation and the sets of states fulfill the following conditions:

• SI is the set of input states, s0 ∈ SI, and ∀s ∈ SI there exists a unique transition t = (s, a, s′) ∈ Tr, with a ∈ I and s′ ∈ SO.

• SO is the set of output states, and ∀s ∈ SO, ∀o ∈ O there exists a unique s′ such that (s, o, s′) ∈ Tr, with s′ ∉ SO. Moreover, there is no i ∈ I and no s′ ∈ S such that (s, i, s′) ∈ Tr.

• SF and SP are the sets of fail and pass states, which are terminal: ∀s ∈ SF ∪ SP, there is no a ∈ I ∪ O and no s′ ∈ S such that (s, a, s′) ∈ Tr.

• C : SP → Time is a function associating time stamps with pass states.
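To make the tuple of Definition 2.2 concrete, a test case can be encoded as a small data structure; the field names and the terminality check below are our own choices, not notation from [117]:

```python
# Hypothetical encoding of Definition 2.2: states are partitioned into
# input, output, fail and pass states; C maps pass states to time stamps.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    transitions: set                 # set of (source, action, target) triples
    s0: str                          # initial state (must be an input state)
    inputs: set
    outputs: set
    fail_states: set
    pass_states: set
    C: dict = field(default_factory=dict)   # pass state -> time stamp

    def check_terminal(self):
        """Pass and fail states must have no outgoing transitions."""
        terminal = self.fail_states | self.pass_states
        return all(src not in terminal for (src, _, _) in self.transitions)

tc = TestCase(
    transitions={("s0", "i1", "s1"), ("s1", "o1", "pass"), ("s1", "o2", "fail")},
    s0="s0",
    inputs={"i1"}, outputs={"o1", "o2"},
    fail_states={"fail"}, pass_states={"pass"},
    C={"pass": 5},                   # pass verdict expected within 5 time units
)
```

The `check_terminal` method verifies the third condition of the definition on this toy test case.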

In Figure 2.5, we present some examples of test cases (time stamps are attached to pass states), where i_i denotes an input (resp. o_i an output).

2.3.2 Test generation

We have discussed two of the testing problems: test execution and verdict assignment. The remaining problem is how to generate test cases: manually or automatically? This is the biggest problem of automatic testing. Many methods have been proposed to automatically generate test cases (or timed test cases). Earlier methods such as TT, W-method, UIO, etc. are based on the generation of paths in the automaton modelling the interactive system to be tested. In this section, we introduce the ideas of some methods. Some test tools that use these or other methods will be introduced in section 2.3.4. Most of these methods are based on the (global or partial) generation of the reachability tree of all the behaviours.


Figure 2.5: Examples of test cases

Hit-or-Jump [48] is an algorithm to generate test cases for an embedded system that can be modeled by communicating extended finite state machines. It is a generalization and unification of both search techniques and random walks. The idea is the following: at any moment, a local search (depth-first or breadth-first) from the current state in a neighborhood of the reachability graph is executed, either until a transition (noted t) that has not yet been considered is found (Hit step) or until a search depth (or space) limit is reached. In the latter case, a Jump step is carried out by randomly selecting a leaf node from the tree generated by the local search, and a new local search is started from this node. In the former case, the current state is updated to the target state of t and a new local search continues. The algorithm terminates when all transitions have been considered at least once. A modification of this strategy is used by Lallali et al [96] to generate timed test cases from an IF (Intermediate Format) model using a timed test purpose.
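The Hit-or-Jump loop can be sketched over an explicit transition graph; the depth limit, data structures and the assumption that every transition is reachable from the start state are our own simplification of [48]:

```python
import random

def hit_or_jump(graph, start, depth_limit=3, seed=0):
    """Cover every transition of 'graph' (state -> list of (label, next)).
    Hit: a bounded local search finds an uncovered transition and moves there.
    Jump: otherwise restart the search from a random leaf of the search tree."""
    rng = random.Random(seed)
    remaining = {(s, l, t) for s, succs in graph.items() for (l, t) in succs}
    state, covered = start, []
    while remaining:
        frontier, hit, leaves = [(state, [])], None, []
        for _ in range(depth_limit):
            nxt = []
            for s, path in frontier:
                for (l, t) in graph.get(s, []):
                    tr = (s, l, t)
                    if hit is None and tr in remaining:
                        hit = (t, path + [tr])
                    nxt.append((t, path + [tr]))
            if hit or not nxt:
                break
            frontier = nxt
            leaves = nxt
        if hit:                              # Hit: follow the found path
            state = hit[0]
            for tr in hit[1]:
                if tr in remaining:
                    remaining.discard(tr)
                    covered.append(tr)
        elif leaves:                         # Jump: random leaf of the tree
            state, _ = rng.choice(leaves)
        else:                                # dead end: restart from the start
            state = start
    return covered
```

On a small strongly connected graph, the loop terminates once every transition has been hit at least once.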

Model checking is a formal verification technique that allows us to automatically analyze a complex system and check whether it satisfies certain properties. In model checking, the properties used to verify the system are usually specified by Linear Temporal Logic (LTL) formulas or other logics. The model checker explores all the possible states of the model and checks whether the properties hold. If a property does not hold, a trace of the steps illustrating the violation of the property is produced, called a counterexample. The works in [33, 67, 75, 83, 157] use this technique to generate test cases with different model checkers such as SPIN [81], NuSMV [1] and Uppaal [18]. In these approaches, test purposes are used as the properties to verify, phrased as: "the transition X is never executed".
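The trick of phrasing a test purpose as "transition X is never executed" can be illustrated with a tiny reachability search standing in for the model checker; all names here are our own:

```python
from collections import deque

def counterexample(graph, start, forbidden):
    """Breadth-first search for a run executing the 'forbidden' transition.
    If the property "transition X is never executed" is violated, the
    model checker's counterexample is exactly such a run, usable as a
    test case. graph: state -> list of (label, next_state)."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        for (label, nxt) in graph.get(state, []):
            new_path = path + [label]
            if (state, label, nxt) == forbidden:
                return new_path          # counterexample = test case
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, new_path))
    return None                          # property holds: X never executed

graph = {"s0": [("login", "s1")], "s1": [("query", "s2"), ("logout", "s0")]}
```

For this toy graph, asking the "checker" whether `("s1", "query", "s2")` is never executed yields the counterexample `["login", "query"]`, which exercises exactly the targeted transition.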


Kim et al [91] proposed building a coverage tree, called a Symbolic Execution Tree (SET), from a formal model of Extended Finite State Machine (EFSM): based on symbolic transformations, it can cover all possible paths, all states and all transitions of the EFSM in order to generate a set of test cases, and it makes it possible to apply particular analysis techniques using breadth-first or depth-first searches. This approach is also used in [23, 68] to generate a set of test cases from an Input/Output Symbolic Transition System (IOSTS), but there the test purposes are defined as particular subtrees of the symbolic execution tree and the depth of the tree is used to bound the test cases.

The simplest method for generating a test sequence is to apply random inputs, constructing a random walk over the graph representing a formal model, until a stop condition is satisfied. It can be extended by immediately executing the system under test and verifying that the output is correct. The next input is then computed from the random walk and the value of the previous input/output. This is called the online approach [19, 32, 60, 97].
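Both variants can be sketched as follows; the graph encoding and the toy oracle (the system under test is expected to uppercase its input) are our own illustrative assumptions:

```python
import random

def random_walk(graph, start, steps, seed=0):
    """Offline: generate a test sequence by a random walk over the model.
    graph: state -> list of (label, next_state)."""
    rng = random.Random(seed)
    state, seq = start, []
    for _ in range(steps):
        succs = graph.get(state, [])
        if not succs:
            break                            # deadlock: stop condition
        label, state = rng.choice(succs)
        seq.append(label)
    return seq

def online_test(graph, start, sut, steps, seed=0):
    """Online: each chosen input is executed immediately on the system
    under test (a callable here) and its output is verified on the fly."""
    rng = random.Random(seed)
    state = start
    for _ in range(steps):
        succs = graph.get(state, [])
        if not succs:
            break
        inp, state = rng.choice(succs)
        if sut(inp) != inp.upper():          # toy oracle: SUT must uppercase
            return "fail"
    return "pass"
```

The offline function only produces a sequence; the online one interleaves generation, execution and verification, as described above.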

The biggest problem in automatic test generation and model checking is the state explosion of the reachability tree. To limit this explosion, some methods based on the on-the-fly technique have been proposed. Castanet et al [44] proposed a method to generate test cases using a test purpose. First, the authors define a set of rules to compute the synchronous product of the specification model and the test purpose. Next, all clocks are synchronized to a single clock and an interval intersection between them is computed for each transition. Finally, for each visible trace of the product that contains no transition whose time-interval intersection is empty, a pass verdict is assigned if the trace leads to Accept states; otherwise an inconclusive verdict is assigned. The remaining cases are assigned a fail verdict. Jard and Jéron [87] also use this technique, but test cases are selected directly on the visible behaviors of the product (it performs a selection of the traces leading to Accept states), because in the formal model of [87] there are no constraints on the clocks.
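Without clocks, the synchronous product and the selection of traces leading to Accept states can be sketched as follows; the automaton encoding, the Accept marking and the restriction to acyclic products are our own simplifications:

```python
def synchronous_product(spec, purpose, spec0, purp0):
    """Compose two automata on shared labels.
    spec, purpose: state -> list of (label, next_state)."""
    product, stack, seen = {}, [(spec0, purp0)], {(spec0, purp0)}
    while stack:
        s, p = stack.pop()
        moves = []
        for (l1, s2) in spec.get(s, []):
            for (l2, p2) in purpose.get(p, []):
                if l1 == l2:                 # synchronize on identical labels
                    moves.append((l1, (s2, p2)))
                    if (s2, p2) not in seen:
                        seen.add((s2, p2))
                        stack.append((s2, p2))
        product[(s, p)] = moves
    return product

def traces_to_accept(product, start, accept_states, path=()):
    """Select the visible traces of the product leading to Accept states
    (assumes an acyclic product)."""
    results = []
    if start[1] in accept_states and path:
        results.append(list(path))
    for (label, nxt) in product.get(start, []):
        results.extend(traces_to_accept(product, nxt, accept_states,
                                        path + (label,)))
    return results

spec = {"s0": [("a", "s1"), ("b", "s2")], "s1": [("c", "s3")]}
purpose = {"p0": [("a", "p1")], "p1": [("c", "Accept")]}
```

Here the purpose prunes the `b` branch of the specification; the only trace leading to Accept is `a, c`, which becomes the test case.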

2.3.3 Passive testing

Passive testing is a process that collects a sequence of messages from the system under test and analyses it to give a verdict. In passive testing the tester does not need to interact with the system under test. Passive testing is usually used as a monitoring technique to detect and report errors when an active testing method cannot be used because it might have a negative effect on the system, for instance corrupting a database. Another application area is network management, to detect configuration problems, identify faults or provision resources. This section discusses some passive testing approaches.

Bayse et al [17] and Cavalli et al [47] proposed a passive testing approach based on invariants of a Finite State Machine (FSM). For an FSM M = (S, sin, I, O, T ), where S is a finite set of states, sin is the initial state, I is the set of input actions, O is the set of output actions and T is the set of transitions, the authors define two types of invariants:

• Simple invariant: a trace such as i1/o1, i2/o2, ..., in−1/on−1, in/O′ is a simple invariant of M if, each time the trace i1/o1, i2/o2, ..., in−1/on−1 is observed and we then obtain the input in, we necessarily get an output belonging to O′, where O′ ⊆ O.


• Obligation invariant: expresses properties such as "if y happens, then x must have happened before".

Next, the authors present two algorithms that check a finite trace from left to right and from right to left to give a verdict. This approach does not consider time constraints on the traces. TIPS [49] (Test Invariants of Protocols and Services) is a tool implementing this approach.
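A left-to-right check of a simple invariant on a trace can be sketched as follows; the trace encoding is our own simplification of the algorithms in [17, 47]:

```python
def check_simple_invariant(trace, prefix, final_input, allowed_outputs):
    """trace: list of (input, output) pairs.
    Invariant: each time 'prefix' is observed and 'final_input' follows,
    the produced output must belong to 'allowed_outputs'."""
    n = len(prefix)
    for k in range(len(trace) - n):
        if trace[k:k + n] == prefix:
            inp, out = trace[k + n]
            if inp == final_input and out not in allowed_outputs:
                return "fail"
    return "pass"

trace_ok  = [("login", "ok"), ("query", "data"), ("logout", "bye")]
trace_bad = [("login", "ok"), ("query", "error")]
```

For the invariant "after login/ok, a query must return data", the first trace passes and the second fails.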

An extension of simple invariants with time constraints, called Timed Invariants, which allows us to express temporal properties, is introduced in the works of C. Andrés et al [8, 9, 10]. The Timed Invariant model has some limitations:

• It supports only future time; past time is not admitted. Its semantics is: if an input/output event pair (the time interval between an input and an output is also considered), or a sequence of event pairs, has happened and we then obtain an input, an output must happen within a given duration;

• It does not support operators such as NOT, AND, OR to combine several conditions into one Timed Invariant;

• It does not consider constraints on the content of each event, so the problem of data correlation between events is also not considered;

• Finally, the tool PasTe [9], implemented to check the correctness of a log with respect to a set of timed invariants, does not allow us to verify an execution trace in parallel with the trace collection engine (i.e., runtime verification or online checking).

Mallouli et al [108] proposed security rules using the Nomad language to express constraints on the traces with obligations, prohibitions and permissions. A prohibition or a permission rule is granted and applies immediately to the trace; an obligation rule needs a deadline, and the required actions must be completed before this deadline. This approach solves the time-constraint limitation of the invariant approach. An algorithm to check the correctness of a trace with respect to these security rules is introduced. This approach does not consider the correlation of messages by their data values, an important problem of passive testing.

M. Tabourier and A. Cavalli [133] proposed an approach to verify that collected traces actually belong to the specification, given as a finite state machine. This method is composed of two stages:

• Firstly, a passive homing sequence is applied to determine the current state. Initially, all states are candidates. When an input/output pair arrives, each candidate state is updated to the destination state of the corresponding transition if it is the source state of that transition; if not, it is removed from the candidate list. After a number of iterations, either a single current state is obtained and we move to the second step to detect faults, or an input/output pair is not accepted by any candidate state. In the latter case, a fault has been detected.


• Secondly, fault detection is performed by applying a search technique from the current state and the current input/output pair. If a state that does not accept the following transition is reached, there is an error; otherwise, when the end of the trace is reached, no error was detected.

This approach cannot be used when the trace is collected from the execution of multiple sessions running in parallel. Moreover, this method does not consider time constraints on the traces.
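The two stages above can be sketched on an FSM given as a transition map; the encoding and the merging of the two stages into one loop are our own:

```python
def passive_test(fsm, states, trace):
    """fsm: (state, input) -> (output, next_state); trace: (input, output) pairs.
    Stage 1 (homing): narrow down the candidate set of current states.
    Stage 2 (fault detection): once a single candidate remains, the same
    loop follows the trace and reports any pair no candidate accepts."""
    candidates = set(states)
    for (i, o) in trace:
        nxt = set()
        for s in candidates:
            if (s, i) in fsm and fsm[(s, i)][0] == o:
                nxt.add(fsm[(s, i)][1])
        if not nxt:
            return "fault"               # no candidate accepts the pair
        candidates = nxt
    return "no fault detected"

fsm = {("s0", "a"): ("x", "s1"),
       ("s1", "b"): ("y", "s0"),
       ("s1", "a"): ("x", "s1")}
```

On this FSM, the trace a/x, b/y homes to a single state and is accepted, while a/z is rejected immediately.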

2.3.4 Test tools

A number of tools have been developed for testing, including test case derivation, test execution and verification. Reviewing some of them in this thesis is our practical motivation to develop new tools for web service testing.

TGSE

TGSE [27] (Test Generation, Simulation and Emulation) is a toolkit developed by LaBRI within the RNRT Averroes project and the European Marie Curie RTN project TAROT (MCRTN 505121). It implements a generic test generation algorithm that generates test cases by simulating a Communicating System (CS). A CS declares a set of shared resources (parameters and variables), a set of automata, a set of rules describing the different possible synchronizations between the entities, and a test purpose modeled by an automaton. This tool supports passive and active testing (with test purposes) of one or several components with data and temporal constraints. For active testing, test case generation is based on simulation where the exploration is guided by test purposes. For passive testing, the TGSE tool is used to check whether a trace of an implementation is a valid execution or not. This trace is also modeled as an ETIOA (Extended Timed Input Output Automaton) and is a component of the CS. This tool will be used in our approach to generate test cases for web services using test purposes (section §3.3.3, chapter §3).

TGV

TGV [37, 87] (Test Generation with Verification technology) is a tool developed by Irisa Rennes and Verimag Grenoble, with the support of the Vasy team of Inria Rhône-Alpes, and integrated into the CADP toolset, for the generation of test cases based on a system's specification and a test purpose. This tool implements the ioco implementation relation [136]. From the specification and the test purpose, a synchronous product is computed, in which the states are marked as Accept or Refuse using the information of the test purpose. Next, test cases are generated by selection on the visible behaviors, i.e., it performs a selection of the traces leading to Accept states. Pass verdicts are based on traces that reach Accept. Traces not leading to an Accept state are truncated and an Inconclusive verdict is added. Finally, Fail verdicts are generated from observations not explicitly present in the test cases corresponding to the test purpose. This tool does not generate timed test cases, because time constraints are not considered in the input model or the test purposes.


TestGen_IF

The TestGen_IF tool [49, 93] is based on an active testing technique to generate timed test cases from a timed automata model in IF [142]. This tool implements an automated test generation algorithm based on the Hit-or-Jump exploration strategy [48], guided by a set of timed test purposes. A timed test case, as defined in this tool, is a timed observable trace that validates the requirements of the test purposes, including time constraints. At any moment, the tool conducts a local search from the current state in a neighborhood of the reachability graph. If a state is reached where one or more test purposes are satisfied (a Hit), the set of test purposes is updated and a new partial search is conducted from this state. Otherwise, a partial search is performed from a random graph leaf (a Jump). In this tool, the Hit-or-Jump exploration strategy stops when all test purposes are satisfied or a deadlock is reached (no transition to explore).

TorX

TorX [32, 134] is a prototype testing tool based on the on-the-fly (or online) approach, using the ioco [136] or tioco quiescence conformance relation depending on the input format. Tioco quiescence admits a time delay at each state, but no concrete duration is fixed. It was developed by the Formal Methods and Tools research group at the University of Twente in the Netherlands, in collaboration with Eindhoven University of Technology, Philips Research Laboratories and the Lucent Technologies R&D center Twente. This tool implements an online testing algorithm that is a loop of two steps: send an input to the SUT, then wait for an output from the SUT for a duration and verify it. After each input/output action, the next action is computed. A pass verdict is given if the loop reaches a stop condition; if not, a fail verdict is produced. The basic formal model of TorX is the Labelled Transition System (LTS), but it also accepts several input formats: LOTOS, Promela, Aldebaran, FSP (Finite State Processes). JTorX [19] is a version of TorX written in Java that is easy to use, install and configure via the JTorX Graphical User Interface (GUI). We do not use this tool to test web services because:

• These input formats do not allow us to model all properties of a specification of a web service composition (for example BPEL), and specifically its data types.

• This tool does not consider transition timestamps or time constraints on each state (time invariants), which are very important for asynchronous web services.

T-Uppaal

T-Uppaal [97, 114] (or Uppaal-TRON) is a tool integrated into the Uppaal tool environment. It performs model-based black-box conformance testing of the real-time constraints of embedded systems. T-Uppaal is an online testing tool, which means that it both generates and executes tests event-by-event in real time. This tool uses timed automata [6] as the formal model. Instead of simulating the environment as TorX does, it needs an assumed environment model, which is also a timed automaton, and it checks that an implementation conforms to this environment following the tioco theory [117]. From a current symbolic state, T-Uppaal randomly chooses one of three basic actions: send a randomly selected relevant input to the IUT (Implementation Under Test), let time pass by some amount and observe the IUT for outputs, or reset the IUT and restart. Of course, the current state is updated after each action. As with TorX, the system under test is attached to T-Uppaal via a test adapter and is considered a black box, since its states cannot be directly observed; only communication events via input/output channels are. But we cannot use it for unit testing of a web service orchestration because:

• The input format of T-Uppaal is timed automata, whose data types are poor; we cannot use them to model SOAP messages.

• A web service works in sessions: a request starts a new session and, in general, the session finishes when a response is returned. So a test needs to run from its initial state to a final state before restarting, while the algorithm of this tool randomly chooses, at each current state, between sending an input, waiting for an output or restarting. Therefore, restarting may be chosen before a final state is reached.

• Finally, T-Uppaal does not simulate the environment; it needs an environment model as input.

2.4 Modelling orchestration of web services

Many formal methods are used for the validation, verification and testing of an orchestration of web services [2]. In recent years, many formal models have been proposed to model an orchestration specification of web services, such as Automata, Petri Nets, Symbolic Transition Systems, Guard Automata, Web Service Timed State Transition Systems, etc. In this section, we give an overview of some recent formal models before selecting one for the formal methods of this thesis. We can classify them into the following families: Automata, Petri Nets, Symbolic Transition Systems. For each model, we also assess its advantages and disadvantages.

2.4.1 Automata

An automaton (or Finite State Machine) is an abstract machine that takes a symbol as an event and jumps, or transitions, from one state to another according to a transition function. The transition function tells the automaton which state to go to next, given a current state and a current symbol. From this basic model, many extended versions have been proposed to adapt to different objects. Here, we present some extended versions used to model an orchestration of web services described in the BPEL language.

Fu et al [64] use a Guard Automaton (GA) to model a BPEL process, adding to each transition a guard that is queried directly from the messages or local variables using XPath. There are three transition types: a local transition t = (s1, g, s2), where g is a guard; a receive transition t = (s1, ?a, s2), where a is an incoming message; and a send transition t = (s1, (!b, g), s2), where b is an outgoing message and g is a guard. This work focuses on the analysis of interactions between web services more than on the internal logic of one service, so data flow and time constraints are not considered. For instance, the guard on the transition of a <while> or <repeat> activity depends on data update functions that are not used in this model.
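The three transition types of [64] can be encoded as follows; the guards here are plain Python predicates standing in for the XPath queries, and the whole step function is our own illustrative sketch:

```python
# Sketch of one step of a Guard Automaton with the three transition types
# of Fu et al: local (s1, g, s2), receive (s1, ?a, s2), send (s1, (!b, g), s2).
def step(state, transition, variables):
    kind = transition[0]
    if kind == "local":                  # t = (s1, g, s2)
        _, s1, guard, s2 = transition
        return s2 if state == s1 and guard(variables) else state
    if kind == "receive":                # t = (s1, ?a, s2)
        _, s1, msg, s2 = transition
        if state == s1:
            variables["last_msg"] = msg  # store the incoming message
            return s2
        return state
    if kind == "send":                   # t = (s1, (!b, g), s2)
        _, s1, msg, guard, s2 = transition
        return s2 if state == s1 and guard(variables) else state
    raise ValueError(kind)

t_local = ("local", "s0", lambda vs: vs["count"] > 0, "s1")
```

A local transition fires only when its guard holds over the current variables, mirroring how the guard of a <while> would depend on data updates.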

The translation from a BPEL specification to an aDFA (annotated Deterministic Finite State Automaton) is proposed by Wombacher et al [147]. This formalism does not consider sets of variables on data or on clocks. The guard of a transition, if it exists, is built from the set of messages. So internal logic such as while and for, as well as data correlation, is omitted, because there are no data variables to control it. Moreover, we cannot control a delay action or a deadline for an action, because clocks are not admitted in this model.

Kazhamiakin et al [89] introduce a formalism, called Web Service Timed State Transition Systems (WSTTS), to capture the timed behavior of composite web services. This formalism considers time constraints, but the data-flow problem is omitted: data variables are not defined, so data correlation management is not considered. In this work, the authors focus only on checking BPEL compositions against time-related requirements, rather than on all the problems of conformance testing.

Zheng et al. [154, 155, 156, 157] give a formal definition of the static semantics and briefly describe the dynamic semantics of a Web Service Automaton (WSA), an extension of Mealy machines (one of the two types of Finite State Machine), to fulfill the formal model requirements of the web service domain. In this model, update functions and guard conditions are admitted on the data flow, but time constraints and time delays are omitted, so the timed activities of BPEL such as <wait> and <onAlarm> are not considered. This model is used as an intermediate format, encoded in XML, to be translated into the input format of the model checkers SPIN or NuSMV.

A timed model, called Timed Extended Finite State Machine (TEFSM), that can model most BPEL activities (time constraints, time delays, data correlation, the property of each action, etc.) was proposed by Lallali et al [93, 94]. This formalism is closely related to timed automata [6] and permits carrying out timing constraints, clocks, state invariants on clocks, properties on transitions, and data variables with their domains. The authors also define a set of rules translating each basic BPEL activity into a partial machine, and compose these via the partial machines of the structural activities. An Intermediate Format (IF) [142] based on communicating timed automata is proposed as the input format of the test or verification tools [95]. BPEL2IF [93, 96] is a tool to translate a BPEL specification into IF.

2.4.2 Petri Nets

Petri nets [151] provide an elegant and useful mathematical formalism for modelling concurrent systems and their behaviors. A Petri net is a directed bipartite graph in which the nodes represent transitions (i.e., events that may occur, signified by bars) and places (i.e., conditions, signified by circles). The directed arcs describe which places are pre- and/or post-conditions for which transitions (signified by arrows). A number of extended Petri nets have been introduced to enhance their expressive capabilities, among them colored Petri nets, timed Petri nets and high-level Petri nets. This section gives an overview of work on modelling BPEL-based web service composition using Petri nets or their extensions. Hinz et al [79] and Ouyang et al [120] defined a set of patterns translating a BPEL process to Petri nets, while Stahl [131] and Dong et al [53] use high-level Petri nets (HPN) to model a BPEL process for analysis and testing. These works cover each kind of BPEL activity: basic activities, structural activities and also exceptional activities such as faults, events and compensation. BPEL2PN [130] is a tool translating a BPEL process into the input format of the LoLA [127] tool, which has been implemented for the validation of reduction techniques for place/transition net reachability graphs. Yang et al [149, 150] used colored Petri nets to model a web service composition (orchestration or choreography), then used CPN Tools [50] to verify some properties of the composition such as reachability, boundedness, dead transitions, dead markings, liveness and fairness.

2.4.3 Other formalisms

Bentakouk et al [23] translate a BPEL process to a Symbolic Transition System (STS), focusing on two main aspects of BPEL: the input/output direction of the messages on the different channels and their data variable domains, while the guards on the data variables of transitions are preserved. Foster et al [59] presented an approach to model and validate a web service composition. The BPEL semantics is described by the Finite State Process (FSP) notation, which represents a Labelled Transition System (LTS). A BPEL process is translated to FSP, which is used as the input format of the model checker LTSA (Labelled Transition System Analyser) to check the correctness of the composition. These models (STS and FSP) are only interested in the control flow of the service composition, without considering its temporal aspect. Moreover, FSP does not describe all BPEL structures, for example message correlation and exceptions. A new model, named BFG (BPEL Flow Graph), an extension of the CFG (Control Flow Graph), is proposed by Yuan et al [152] to represent a BPEL process as a graphical model. Concurrent test paths can then be generated by traversing the BFG model, and test data for each path can be generated using a constraint solving method. Finally, test paths and data are combined into complete test cases.

2.4.4 Discussions

In this section (§2.4), we have discussed several formal models for the orchestration of web services. Each model has different advantages and disadvantages, because they focus on different testing problems. The works based on automata models (aDFA, GA, WSTTS, WSA, TEFSM) allow us to verify the behaviour of an orchestration with time and data variable constraints. Petri nets allow us to model the concurrent actions of an orchestration, for example the activities in a BPEL flow activity, while the Symbolic Transition System (STS) focuses on the input/output direction of the messages on the different channels and their data variable domains. But the STS and Petri net models do not consider time constraints. In our work, we use the TEFSM of Lallali et al [93, 94], adding a set of corresponding variable domains, to model an orchestration of web services.

2.5 Web services testing

As said earlier, web services are the basic applications used to build SOA applications. In recent years, the software testing community has taken an interest in this domain. This section discusses some methods and tools that have been proposed to test a web service (using only its WSDL specification) or a composition of web services using a BPEL specification.

2.5.1 Testing from WSDL

Testing of web service from WSDL-based is a kind of black-box testing because only spec-ifications of input/output messages of the operations are available. There is no information of


the internal logic, the relations between operations, or the data correlations between the input/output messages of the operations, etc. Methods that use this specification usually focus on problems such as operation flow, existence checking of operations, validation of input/output message types, fault management and robustness.

In the method of X. Bai et al [11], test case generation from the WSDL covers four levels: test data generation, operation testing, operation flow testing and test specification. Test data are generated from the data types of the messages, by examining the XML schema for complex types and generating a corresponding value for each simple data type. Data may be drawn from a set of default values, from a pattern list, or at random. These messages are then used to test the operations. Operation flow tests are generated by a dependency analysis between operations based on their input/output parameters. Finally, the generated test cases are encoded in an XML format called Service Test Specification (STS) and executed against the Service Under Test. The STS reuses WSDL concepts, including operation, part, message, input and output, reflecting a natural mapping between WSDL elements and test elements.
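The data-generation level of this approach can be sketched as follows. The type names, value ranges and message layout below are illustrative assumptions of ours, not the actual behaviour of Bai et al's tool:

```python
import random
import string

def generate_simple(xsd_type, rng=random.Random(0)):
    """Generate a value for a simple XSD type (illustrative type set)."""
    if xsd_type == "xsd:int":
        return rng.randint(-1000, 1000)
    if xsd_type == "xsd:boolean":
        return rng.choice([True, False])
    if xsd_type == "xsd:string":
        return "".join(rng.choice(string.ascii_letters) for _ in range(8))
    raise ValueError("unsupported simple type: " + xsd_type)

def generate_message(parts):
    # parts: {part_name: xsd_type}. A complex type would be handled by
    # recursing over its XML-schema structure (omitted in this sketch).
    return {name: generate_simple(t) for name, t in parts.items()}

msg = generate_message({"amount": "xsd:int", "confirm": "xsd:boolean"})
```

A pattern list or a set of default values would simply replace the random generator for the corresponding types.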

Starting from the WSDL specification, Salva et al [125, 126] are interested in existence checking of operations, correctness of output message types, fault management and robustness. For operation existence, each operation is called with random parameter values respecting the WSDL file. An operation exists if it returns either a response of the type described in the WSDL file or a SOAP fault whose cause is different from "client" and "endpoint reference not found": the first cause means that the operation was called with bad parameter types, the second that the operation name does not exist. If the operation exists, its robustness is then tested. This technique is implemented in the tool WS-AT (Web Service Automatic Testing).
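The existence heuristic can be sketched as a small classifier over the observed outcome of a random invocation. The outcome encoding and the fault-cause labels ("Client", "EndpointNotFound") are our own stand-ins, not the exact SOAP fault strings used by WS-AT:

```python
def operation_exists(outcome):
    """outcome: ("response", payload) for a WSDL-described reply,
    or ("fault", cause) for a SOAP fault."""
    kind, detail = outcome
    if kind == "response":
        return True  # a declared response type came back
    # existence is refuted only by the two causes named in the text
    return detail not in ("Client", "EndpointNotFound")
```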

C. Keum et al [90] generate test cases, in the form of operation flows, using an Extended Finite State Machine constructed from the pre-conditions and post-conditions on parameter values. J. Offutt and W. Xu [119] presented a new approach to testing the interactions of web services based on data perturbation: existing XML messages are modified according to rules defined on the message grammars and then used as tests. Data perturbation uses two methods to test web services: data value perturbation and interaction perturbation.
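Data value perturbation can be illustrated with a minimal mutator that replaces each numeric leaf of an XML message with a boundary value, yielding one mutant message per field. The message and the chosen boundary value are invented for the example:

```python
import xml.etree.ElementTree as ET

def perturb_numeric_values(xml_text, replacement="0"):
    """Return one mutant message per numeric leaf element."""
    root = ET.fromstring(xml_text)
    mutants = []
    for elem in root.iter():
        if elem.text and elem.text.strip().lstrip("-").isdigit():
            original = elem.text
            elem.text = replacement  # inject the perturbed value
            mutants.append(ET.tostring(root, encoding="unicode"))
            elem.text = original     # restore for the next mutant
    return mutants

msg = "<order><id>42</id><qty>3</qty></order>"
tests = perturb_numeric_values(msg)
```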

Test tools for single web services

Jambition [60] is a tool that automatically tests a web service from an STS (Symbolic Transition System) specification. It is integrated in the PLASTIC framework [28]. This tool addresses the same problems as X. Bai et al [11], but the testing approach of Jambition is random, on-the-fly, and uses the ioco relation.

SOAPUI [58] is a basic tool used to execute a web service, verify the syntax and semantics of a WSDL file, and also verify SOAP messages. It automatically generates SOAP templates for the input messages from the XML schema. In the case of a web service composition, it allows us to simulate the partner services via its MockServices. WS-TAXI [16] is a tool used to generate random values for the SOAP templates of SOAPUI, while TestMaker [122] is a platform for functional, regression, load and performance testing, and business service monitoring.


2.5.2 Testing from BPEL specification

Nowadays, BPEL is a popular language for the specification and development of web service compositions, so many testing approaches for web service composition use BPEL as an input format before translating it into a formal model. This section discusses two testing approaches: structural testing and functional testing. In the case of functional testing of a web service composition, we speak of gray-box testing, to distinguish it from the black-box testing of a single web service, because we know the interactions between the services in the composition and we can capture the messages exchanged between them. This approach is usually used when we do not know the internal actions but have a specification such as abstract BPEL, BPMN, UML, etc. Structural testing means that we verify the structure of the application using its source code, for example an executable BPEL process. There are two types of this approach: static analysis via verification (called analysis and verification) and dynamic analysis via test execution (called white-box testing).

Analysis and verification

Model checking is a formal verification technique that automatically analyzes whether a complex system satisfies certain properties. In web service composition, this technique is used in many methods to analyze a BPEL specification. To use this approach, BPEL is first translated, directly or via an intermediate format, into the input format of a model checker. Next, a set of constraints is defined to express the properties of interest. Finally, the model checker is used to analyze these properties. EA4B [72] (Execution Analysis tool for BPEL) is an analysis tool for BPEL processes that integrates the WSAT [65] tool and the model checker SPIN [81]. WSAT is a tool that generates Promela from the BPEL specification, using Guarded Finite State Automata as an intermediate format; it also analyses synchronizability before generating the Promela code. Many other works, such as Foster et al [59] and Nakajima [116], also use model checkers for the verification of web service compositions described in BPEL.

Yang et al [149, 150] first translate the BPEL specification into colored Petri nets (CPNs), then use CPN Tools [50] to verify properties of BPEL such as reachability, boundedness, dead transitions, dead markings, liveness, fairness, etc. A complete transformation from BPEL to Petri nets is given by Ouyang et al [120]. Hinz et al [79] describe the tool BPEL2PN, which implements this transformation while abstracting from data. The resulting Petri net can be verified using the tool LoLA (a Low Level Analyzer), which supports the verification of standard Petri net properties, such as determining whether a net contains a deadlock, as well as properties expressed in the logic CTL.

Ouyang et al introduce the WofBPEL [121] tool, which analyzes:

• Unreachable activities;

• Potentially conflicting "message receipt" actions;

• Determining, for each action in a BPEL process definition, which messages might eventually be consumed by the process after this action has been performed. This information can be used by a BPEL engine to detect which messages in the inbound queue may be discarded, and thus to optimize resource consumption.


White-box testing

When we test a service, we observe only the input and output messages. Using the white-box testing technique, we can verify the correctness of the output data because we know the internal actions via the source code, whereas the gray-box testing technique does not allow this. Several works [102, 103, 110, 111, 152] focus on verifying the structure of a BPEL process via test execution:

Li et al [102] proposed a framework for unit testing of a BPEL process. This framework includes a test architecture based on simulating the partners, a lifecycle management schema and a test design outline. For the test architecture, the authors proposed two types: centralized and distributed. The centralized architecture is used if the Process Under Test (PUT) has only one partner, or if we choose to simulate the joint behavior of all the partner processes by a single test process. The distributed architecture is used if we want each test process to simulate one PUT partner. In this case, communication (or synchronization) between test processes can be used, depending on the sequential or parallel execution logic of the PUT.

Mayer [110, 111] developed a framework named BPELUnit [34] that also focuses on unit testing of BPEL processes and uses Li's distributed architecture [102]. Each partner is simulated by one test process, but there is no synchronization between these test processes. This framework allows us to execute one or several test cases described in an XML format, that is, to send the inputs (a SOAP message or a fault message) and to receive and verify the outputs. The interaction with the BPEL process can be synchronous or asynchronous. BPELUnit supports verifying time constraints on the outputs and time delays on the inputs, but it does not generate test cases automatically.

Yuan et al [152] proposed a graph-search based approach to BPEL test case generation that effectively deals with the BPEL concurrency semantics. First, BPEL is translated into the BPEL Flow Graph (BFG), an extension of the CFG (Control Flow Graph), which represents a BPEL program as a graphical model. Next, the BFG is traversed to generate test paths. A BPEL test path is a partially-ordered list of basic activities that are executed during a specific test run. BPEL activities in a test path are classified into three types: "input-type" (receive, the receiving direction of a 2-way invoke, etc.), "output-type" (the sending direction of a 2-way invoke, reply, 1-way invoke, etc.) and "dataHandling-type" (assignment, etc.). After the test paths are found, test data for the "input-type" activities are generated, based on the boolean conditions along the path or by random data generation, to make each test path executable. At this step, the "dataHandling-type" activities are removed, and the "output-type" logic must be prepared manually, in the form of either exact data values or invariant assertions, commonly called "test oracles". The authors call these abstract test cases, because they must still be transformed into the input format of a test execution tool.

Yan et al [148] propose a method of BPEL test case generation based on concurrent path analysis. This method first uses an Extended Control Flow Graph (XCFG) to represent a BPEL process, and generates all the sequential test paths from the XCFG, using either the basic path coverage proposed by the authors or a test coverage criterion defined by the user. These sequential test paths are then combined to form concurrent test paths. The authors also provide a technique to process constraints in order to generate test data for the test paths. This approach handles not only test case generation, but also test cases for concurrent


flows; however, time constraints are not yet considered.
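The combination step, merging sequential paths into concurrent ones while preserving the order inside each path, can be sketched with a simple recursive merge of two branches. This is an illustration of the idea, not Yan et al's algorithm:

```python
def interleavings(p, q):
    """All interleavings of two sequential paths that preserve the
    internal order of each path."""
    if not p:
        return [list(q)]
    if not q:
        return [list(p)]
    return ([[p[0]] + rest for rest in interleavings(p[1:], q)]
            + [[q[0]] + rest for rest in interleavings(p, q[1:])])

# Two branches of a hypothetical <flow>: a1;a2 in parallel with b1.
combined = interleavings(["a1", "a2"], ["b1"])
```

In practice the number of interleavings grows combinatorially, which is why coverage criteria are used to select a subset.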

Zheng et al [156, 157] proposed a framework to test a web service composition described in BPEL. The BPEL process is modeled by an intermediate formal model, WSA (Web Service Automata) (see 2.4.1). This model is then translated into SMV (the input format of the model checker NuSMV) or Promela (the input format of the model checker SPIN). NuSMV or SPIN is used as the test case generation tool, where the test coverage criteria (i.e., all-states, all-transitions or all-du-paths) are described in temporal logic (LTL or CTL). Garcia-Fanjul et al [66, 67] also use the model checker SPIN to generate test cases, but the BPEL process is translated directly into Promela instead of going through an intermediate formal model.

Gray-box testing

Gray-box testing of a web service composition means that we test only the interactions of the web services via the exchanged messages, without access to the internal actions. Depending on the available specification, either the interactions of all the web services are tested, or only the interaction between the client and the service under test. Lallali [93] proposed a framework for unit testing, meaning that only one process of a composition is tested, in isolation from its partners. The author uses BPEL as a specification (not source code) of the composition, the IF language (see 2.4.1) to model this specification, and the TestGen_IF [49, 93] tool to generate timed test cases based on test purposes. Finally, these timed test cases are executed using the BPELUnit [34] framework by simulating all the partner services.

Bentakouk et al [23] proposed a framework for the automated testing of a web service orchestration described in BPEL. The authors first translate BPEL into a formal model, the Symbolic Transition System (STS), from which the Symbolic Execution Tree (SET) is computed. While the SET is computed, only feasible paths are retained, which limits the state explosion: the existing variables are evaluated and a path is cut off if the guard on a transition cannot be satisfied. The path length is used as a criterion to cover a set of execution paths, which are finally run by a test oracle against the orchestration implementation. The test cases generated by this approach could be used in a gray-box testing approach, because they include the sequences of input/output messages exchanged between the client and the SUT, and between the SUT and its partners. However, the authors used a black-box testing architecture to run these test cases against the orchestration implementation, hiding the messages exchanged between the SUT and its partners. Moreover, this method does not consider time constraints.

2.5.3 Passive testing

In recent years, many methods and tools have been proposed and developed for the passive testing of web services (including web service compositions) [54, 14, 13, 12, 45, 101]. These works focus either on checking the order of messages and/or their occurrence times on a trace file in order to give a verdict [45, 101, 100], or on proposing methods for dynamic statistics [14, 12] over some properties of web services.

Dranidis et al [54] propose the use of Stream X-machines for constructing formal behavioral specifications of web services. The authors also present a runtime monitoring and


verification architecture and discuss how it can be integrated into different types of service-oriented infrastructures. However, the authors do not present an algorithm or a tool to verify an execution trace against the Stream X-machine specification of a web service.

Baresi et al [14, 13] present a monitoring framework for BPEL orchestrations, obtained by integrating two approaches, namely Dynamo and Astro. These approaches are used for dynamic statistics of properties of a BPEL process, over a single instance or multiple instances. These works focus on the behavioral properties of composition processes expressed in BPEL rather than on individual web services. Moreover, an assessment (a true/false verdict) about the service is not considered in this work.

Cavalli et al [45] propose a trace collection mechanism for SOA, obtained by integrating modules within the BPEL engine, together with a tool [45, 108] that checks an execution trace offline. This approach uses the Nomad [51] language to define the security rules, but it does not allow us to check in real time (i.e., online) whenever a message occurs. Moreover, this work does not consider the data correlation between the messages in the rules.

The works of Li et al [101, 100] present pattern and scope operators as a rule base to define the interaction constraints of web services. The authors use finite state automata (FSA) as the semantic representation of the interaction constraints. In this approach, the validation process runs in parallel with the trace collection. This approach is limited by the number of available patterns. Moreover, this work does not consider time constraints.
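The FSA-based checking step can be illustrated as follows: an interaction constraint is encoded as an automaton and run over the observed message trace. The automaton below (a login must precede any query) is an invented example, not one of Li et al's patterns:

```python
# Transition table of the constraint automaton: (state, message) -> state.
FSA = {
    ("s0", "login"):  "s1",
    ("s1", "query"):  "s1",
    ("s1", "logout"): "s0",
}

def check_trace(trace, start="s0", accepting=("s0", "s1")):
    """Pass verdict iff the trace is accepted by the constraint FSA."""
    state = start
    for msg in trace:
        nxt = FSA.get((state, msg))
        if nxt is None:
            return False  # no transition: the constraint is violated
        state = nxt
    return state in accepting

ok = check_trace(["login", "query", "logout"])
bad = check_trace(["query"])
```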

2.5.4 Other works

There are several works that focus on robustness testing. Looker et al [104, 106, 107] are interested in assessing the dependability of web services by fault injection. The authors also provide a fault injection tool, named WS-FIT [105] (Web Service Fault Injection Technology), that allows us to inject faults at the network level in order to test web services. The communication faults injected into a web service are: duplication, omission, insertion of extra messages, delay and reordering of SOAP messages.

Bessayah et al [31] introduced a fault injection tool for testing web service compositions, named WSInject. This tool allows us to inject faults not only into the messages exchanged between the client and the SUT, but also into the messages between the SUT and its partners. WSInject supports fault injection at two levels: SOAP interface faults and communication faults. At the SOAP interface level, faults are injected by modifying the message contents, for instance changing values or deleting a field in the message. At the communication level, the authors consider the following types of fault injection: corruption of parameter values or of the message structure, delaying of requests/responses, message deletion and message replication.
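SOAP-interface fault injection can be sketched as a mutator over the message content; the encoding, names and fault set below are illustrative assumptions of ours and not the WSInject API:

```python
def inject_faults(message):
    """Produce faulty variants of a message (as a field dict):
    one value corruption and one field deletion per field."""
    faults = []
    for field in message:
        corrupted = dict(message)
        corrupted[field] = "###CORRUPTED###"   # corrupt the field value
        faults.append(("corrupt:" + field, corrupted))
        deleted = {k: v for k, v in message.items() if k != field}
        faults.append(("delete:" + field, deleted))
    return faults

variants = inject_faults({"amount": 10, "currency": "EUR"})
```

Communication-level faults (delay, deletion, replication) would instead act on the transport of whole messages rather than on their content.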

2.6 Conclusion

At the beginning of this chapter, we presented an overview of web service standards such as WSDL, SOAP and UDDI, and of a popular language (i.e., BPEL) that is usually used to define a


composition of web services in two ways: abstract (choreography) and executable (orchestration). We then introduced an overview of software testing classifications, as well as some testing approaches and tools for conformance testing and passive testing. This chapter also discussed the existing works on web service testing that form the starting point of our thesis. The different formal models of web service orchestration were discussed, such as Guarded Automata, WSA, WSTTS, STS, TEFSM, Petri nets and the BPEL Flow Graph. Many testing methods for web services, focusing on different types of testing such as black-box, white-box and gray-box, or on verification techniques, were presented. We also introduced some works on passive testing, monitoring and fault injection.

In the next chapter, we will propose conformance testing approaches for web services, including unit and integrated testing approaches. For unit testing, we propose two methods: an offline method, where the test activities (test case generation, test execution and verdict assignment) are executed sequentially, and an online method, where these activities are executed in parallel.


Chapter 3

Testing Approaches for Web Services

Contents

3.1 Introduction 37
3.2 Formal model 38
  3.2.1 Timed Extended Finite State Machine 39
3.3 Unit testing 43
  3.3.1 Test architectures 43
  3.3.2 Conformance relation 44
  3.3.3 Offline approach 46
  3.3.4 Online approach 58
3.4 Integrated testing 62
  3.4.1 Methodology 62
  3.4.2 Checking algorithm 63
3.5 Conclusion 66

This chapter presents our main contributions on conformance testing. It comprises approaches for unit testing and integrated testing. For unit testing, we have proposed two approaches, offline and online, that can be applied to test a single web service or an orchestration. Offline testing first generates abstract timed test cases from a formal model; the concrete timed test cases are then generated and executed against the implementation under test. Online testing generates, randomly selects and executes the timed test cases in parallel. For integrated testing, we focus only on orchestrations. All our approaches are based on the TEFSM model defined by Lallali [94]. Some works of this chapter have been published at ICST 2010 [42], ICIW 2010 [40] and Services Congress 2009-I [41].

3.1 Introduction

Web services are the building blocks of a SOA application. A web service can itself be a SOA

application: we can reuse and integrate other web services to obtain a new service, called a web


service composition. The latter is executed by interacting with other services, called partner services, to accomplish its workflow. The Quality of these Services (QoS) affects the quality of the SOA application. Nowadays, the software testing community is interested in proposing test methods for web services, including web service compositions. Many testing methods for web services were analyzed in section §2.5 of chapter §2.

The following characteristics distinguish a web service composition from other applications or systems: (i) the interactions are synchronous or asynchronous; (ii) the communicated message types are complex; (iii) faults and events must be managed; (iv) there are data correlations between messages; (v) sessions must be managed; etc. It is important to consider all these characteristics when testing such a composition. A formal model of a web service composition must therefore provide at least: (i) time constraints, to validate the synchronous or asynchronous interactions; (ii) a set of variables, to model the communicated messages, faults and events; (iii) a finite set of final states, to indicate that a session finishes. The formal models of the approaches or tools analyzed in sections §2.3.2 and §2.3.4 do not cover all three of these requirements, or their variable data types do not allow us to model the complexity of the communicated messages. Moreover, most of the test generation methods applied to these models do not consider the updating of data variables while generating test cases; consequently, these methods cannot generate exact test cases if the model contains loops that are controlled by variables.

In general, testing a single web service is functional or black-box testing, where test cases are designed based on the interface specification of the web service under test, not on its implementation. In this case, the internal logic does not need to be known, whether it is coded in Java, C# or BPEL. Problems such as the operation flow, existence checking of operations, validation of input/output message types, fault management and robustness are usually considered. In contrast, most of the testing approaches for a web service orchestration focus on white-box testing or static analysis of the BPEL source code [59, 102, 110, 116, 149, 152, 156]. But the source code of a web service orchestration is not always available to the tester; in many cases, we test a web service orchestration using only its specification and the input/output messages generated by its implementation. There is, however, a difference between testing a single web service and testing a web service orchestration. In orchestration testing, we have not only the input/output messages between the tester and the implementation under test (IUT), as in single web service testing, but also the messages exchanged between the IUT and its partners. We call this approach gray-box testing to distinguish it from the black-box testing of a single web service. The approaches in [23, 93] focus on gray-box testing of a web service composition. However, [23] does not consider time constraints, and in [93] the authors do not require that the occurrence time of an output belong to an allowed time interval: only the time delay before sending an input and the action delay of the SUT are considered.

3.2 Formal model

Many formal models have been proposed to model a web service orchestration, as analyzed in section 2.4. Some timed models have also been proposed at LaBRI, such as the ETIOA [24, 26] (Extended Timed Input/Output Automaton), to model communicating systems. The TEFSM (i.e., Timed Extended Finite State Machine) is a formal specification that was


proposed by Lallali et al [94] to model a web service orchestration described in the BPEL language. This formalism is closely related to timed automata [6] and can express timing constraints, clocks, state invariants on clocks, properties on transitions, and data variables. It allows us to model all the activities of BPEL, basic and structured, including <faultHandler>, the time activities <wait> and <onAlarm>, and data correlation. In particular, a set of rules to translate a BPEL process into a TEFSM was introduced in [93, 94]. In the context of the WebMov project, we use this model as the input of our approach.

3.2.1 Timed Extended Finite State Machine

Definition 3.1. (TEFSM): A TEFSM M is a tuple M = (S, s0, F, V, D|V |V, Eτ, C, Inv, T) where:

• S = {s0, s1, ..., sn}, is a finite set of states;

• s0 ∈ S is an initial state;

• F ⊆ S is a set of final states;

• V is a finite set of data variables;

• D|V |V is the data variable domain of V;

• Eτ is a finite set of events. Eτ is partitioned into:

– input events ?a (EI);
– output events !b (EO);
– the internal event τ.

• C is a finite set of clocks including a global clock gc (never reset);

• Inv: S 7→ Φ(C) is a mapping that assigns a time invariant to states;

• T ⊆ S × Eτ × (P(V) ∧ Φ(C)) × 2^C × µ × S is a set of transitions, where:

– P(~v) & φ(~c): the guard conditions on data variables and clocks;
– µ(~v): the data variable update function;
– X ⊆ 2^C: the set of clocks to be reset.

The clocks

A clock is a variable that records the passage of time. It can be set to a certain value and inspected at any moment to see how much time has passed. All clocks increase at the same rate; they range over R+, and the only assignments allowed are clock resets R ⊆ C of the form c := 0 (denoted by R 7→ 0). A set of clocks (c0, c1, ..., cn) is denoted by ~c.

Definition 3.2. (Timed constraint): For a set C of clocks, the set of timed constraints Φ(C) is defined on C by the grammar: Φ := Φ1 | Φ2 | Φ1 ∧ Φ2, Φ1 := c ∼ m, Φ2 := n ∼ c, where c is a clock of C, (n, m) are two natural numbers, and ∼ ∈ {<, ≤, >, ≥, =}.
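A constraint of Φ(C) can be evaluated against a clock valuation. The following sketch uses our own list-of-atoms encoding of a conjunction, which is not part of the definition:

```python
import operator

# Comparison operators allowed by the grammar of Definition 3.2.
OPS = {"<": operator.lt, "<=": operator.le, ">": operator.gt,
       ">=": operator.ge, "==": operator.eq}

def satisfies(valuation, constraint):
    """valuation: {clock_name: value}; constraint: a conjunction of
    atoms (clock, op, natural-number bound)."""
    return all(OPS[op](valuation[c], bound) for c, op, bound in constraint)

phi = [("t", "<=", 5), ("gc", ">", 2)]   # encodes t <= 5 and gc > 2
```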


The variables

For a set V of variables, the data domain of each concrete variable v ∈ V can be a simple type (R: real, Z: integer, B: boolean, etc.) or a complex type (structure, interval, enumeration, array, etc.). A universal domain D|V |V is used to represent the abstract data types of V. A set of variables (v0, v1, ..., vm) is denoted by ~v.

Definition 3.3. (Variable constraint): For a set V of variables, the set of variable constraintsP(V) that is defined on V is a recursive constraint expressions following the grammar:

1. every proposition defined on the variables, as well as true and false, is a constraint expression;

2. vi ∼ dv is a constraint expression, where vi ∈ V, dv ∈ Dv is a value belonging to the domain Dv, and ∼ ∈ {<, ≤, >, ≥, =, ≠};

3. if p and q are constraint expressions, then ¬p, p ∧ q and p ∨ q are also constraint expressions;

4. all constraint expressions are built by (1), (2) and (3).

Definition 3.4. (Variables update function): The update of data variables is represented bythe action ~v := µ(~v). It is denoted by [~v := ~x].

Transitions

Each transition represents an edge from a state si to a state sj, annotated with a set of guards, actions, data variable updates and clock resets.

Definition 3.5. (Transition): A transition t = (s l→ s′) ∈ T is associated with a triple l = <e, cond, [~v := ~x; R 7→ 0]> where:

• cond = P (~v)&φ(~c) is a guard over clocks and data variables;

• e ∈ Eτ is an event;

• ~v := ~x is a set of data update function;

• R 7→ 0 is a set of clocks to be reset.

The semantics of TEFSM

The semantics of a TEFSM is defined by a labeled transition system (LTS). Two transition types are considered: delay transitions and action transitions.

• A delay transition does not change the state, and the machine does not execute any action, but the current values of the clocks increase. The priority of a delay transition is the lowest: if any other transition goes out from the same source state, those transitions are executed first.


• An action transition is executed immediately (its priority is higher than that of a delay transition) when an event e ∈ Eτ arrives and the guard condition holds. The machine then follows the transition by executing the action, changing the current values of the data variables by the action [~v := ~x], resetting the clock subset R, and moving to the next state.

Definition 3.6. (The semantics of TEFSM): For a TEFSM M = (S, s0, F, V, D|V |V, Eτ, C, Inv, T), the semantics of M is defined by a Labeled Transition System SemM = (L, l0, Γ, ⇒) where:

• L ⊆ S × R+^|C| × D|V |V is the set of semantic states (s, u, v), where:

– s is a state of the machine M;
– u is a clock assignment that satisfies the invariant of the state s (i.e., u ∈ Inv(s));
– v is a data value, represented by the data variable valuation at state s.

• l0 = (s0, u0, v0) is the initial state;

• Γ = Eτ ∪ {d | d ∈ R+} is the set of labels, where d corresponds to the elapse of time;

• ⇒⊆ L× Γ× L is the transition relation defined by:

– action transition: let (s, u, v) and (s′, u′, v′) be two semantic states; then (s, u, v) a⇒ (s′, u′, v′) iff ∃ t = s <e,cond,[~v:=~x;R7→0]>−−−−→ s′ ∈ T such that:
∗ u ∈ cond, u′ = u[R 7→ 0], u′ ∈ Inv(s′);
∗ v′ = v[~v := ~x];

– delay transition: (s, u, v) d⇒ (s, u ⊕ d, v) iff ∀ 0 ≤ d′ ≤ d, u ⊕ d′ ∈ Inv(s).

Let M = (S, s0, F, V, D|V |V, Eτ, C, Inv, T) be a TEFSM and SemM = (L, l0, Γ, ⇒) its semantics. We have the following definitions:

Definition 3.7. (Deterministic): M is deterministic (noted D-TEFSM) if ∀ l = (s, u, v), ∀ α ∈ Eτ ∪ {d | d ∈ R+}: (l α⇒ l′ ∧ l α⇒ l′′) ⇒ (l′ = l′′).

Definition 3.8. (Timed event): A timed event over Eτ is a pair a/d such that a ∈ Eτ andd ∈ R+.

Let Σ = (E × R+) be the set of timed events and Στ = (Eτ × R+) the set of timed events including internal actions (i.e., Eτ = E ∪ {τ}).

Definition 3.9. (Timed sequence): A timed sequence over Στ, σ = a1/d1, a2/d2, ..., an/dn, is a finite sequence of timed observable events. Seq(Στ) denotes the set of timed sequences.


Definition 3.10. (Run): A run r of the machine M over σ is a finite sequence of the form (s0, u0, v0) a1/d1⇒ ... (sn−1, un−1, vn−1) an/dn⇒ (sn, un, vn). This execution is noted Runσ = (s0, u0, v0) σ⇒ (sn, un, vn), and Run(M) denotes the set of runs of M.

Let Σ′ ⊆ Στ and σ ∈ Seq(Στ) be a timed sequence. πΣ′(σ) denotes the projection of σ to Σ′, obtained by deleting from σ all actions not present in Σ′.
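As an illustration, the projection operator can be sketched in Python (a hypothetical encoding, not part of the thesis): a timed sequence is represented as a list of (action, delay) pairs. The first function is the literal projection of the definition; the second is a variant, under the assumption that the delays of deleted internal events should be folded into the next observable event so that cumulative time is preserved.

```python
# Sketch of the projection pi_{Sigma'} (hypothetical encoding, not from the thesis).
# A timed sequence is a list of (action, delay) pairs; "tau" is the internal action.

def project(sigma, alphabet):
    """Literal projection: delete every timed event whose action is not in alphabet."""
    return [(a, d) for (a, d) in sigma if a in alphabet]

def project_preserving_time(sigma, alphabet):
    """Variant (our assumption): fold the delays of deleted internal events into
    the next kept event, so that cumulative time is preserved."""
    out, pending = [], 0
    for a, d in sigma:
        if a in alphabet:
            out.append((a, d + pending))
            pending = 0
        else:
            pending += d
    return out
```

For example, projecting the sequence ?a/1, τ/2, !b/3 onto {?a, !b} deletes the internal event; the time-preserving variant reports !b with delay 5.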

Definition 3.11. (Timed traces): The observable timed traces of M are defined by: Traces(M) = {πΣ(σ) | σ ∈ Seq(Στ) ∧ (s0, u0, v0) ⇒^σ (sn, un, vn)}.

Definition 3.12. (Reachability): A state s is reachable in M if and only if ∃ (s0, u0, v0) ⇒^σ (sn, un, vn) ∈ Run(M) such that s = sn.

Definition 3.13. (Correct TEFSM): M is called a correct machine if and only if ∀s ∈ S_M, s is reachable.

Figure 3.1: An example of TEFSM

Example 3.1. Figure 3.1 illustrates a simple example of a TEFSM where s0 is the initial state, s5 is the final state, t is a local clock and:

• S = {s0, s1, s2, s3, s4, s5}; the time invariant of s3 is t ≤ 5 and the invariants of the other states are undefined;


• V = {x, y, z};

• D^{|V|}_V = {int, int, int};

• E = {?a, !b, !c, ?d, ?e, !f};

• T = {t0, t1, t2, t3, t4, t5, t6, t7} where (the symbol _ denotes an internal event):

– t0 = (s0, <?a, [], {x := a; t := 0}>, s1);
– t1 = (s1, <_, [x > 10], {}>, s0);
– t2 = (s1, <!c, [5 ≤ x ≤ 10], {}>, s2);
– t3 = (s1, <!b, [x < 5], {y := x + 2; t := 0}>, s3);
– t4 = (s2, <?d, [], {z := x + d}>, s4);
– t5 = (s3, <?e, [t < 5], {}>, s4);
– t6 = (s3, <_, [t == 5], {}>, s5);
– t7 = (s4, <!f, [], {z := z + y}>, s5);
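To make the example concrete, a few transitions of this TEFSM can be encoded as plain data, for instance in Python (an illustrative sketch; the transition tuples and the guard/update lambdas are our own encoding, not the thesis's tooling):

```python
# Sketch (hypothetical encoding) of part of the TEFSM of Example 3.1: each
# transition is (source, event, guard, update, target), where guards and updates
# read/write the variable valuation v (a dict) and the local clock t.
transitions = [
    ("s0", "?a", lambda v, t: True,              lambda v, t: (dict(v, x=v["a"]), 0),       "s1"),
    ("s1", "!c", lambda v, t: 5 <= v["x"] <= 10, lambda v, t: (v, t),                        "s2"),
    ("s1", "!b", lambda v, t: v["x"] < 5,        lambda v, t: (dict(v, y=v["x"] + 2), 0),   "s3"),
    ("s3", "?e", lambda v, t: t < 5,             lambda v, t: (v, t),                        "s4"),
    ("s4", "!f", lambda v, t: True,              lambda v, t: (dict(v, z=v["z"] + v["y"]), t), "s5"),
]

def step(state, event, v, t):
    """Fire the enabled transition for `event` from `state`, if any."""
    for src, e, guard, update, dst in transitions:
        if src == state and e == event and guard(v, t):
            v2, t2 = update(v, t)
            return dst, v2, t2
    raise ValueError("no enabled transition for %s in %s" % (event, state))
```

Starting from s0 with v = {"a": 3, "z": 0}, firing ?a then !b reaches s3 with y = 5, following transitions t0 and t3 above.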

3.3 Unit testing

Unit testing is used to find bugs in a single web service or in an orchestration. In the case of an orchestration, we test only the main process, without interaction with its real partners, in order to guarantee correct input conditions. Conformance testing establishes whether or not the implementation respects its specification. It is a type of functional testing, black-box (for a single web service) or gray-box (for an orchestration), where the internal behaviour is not available. The conformance of the SUT is tested using only the messages exchanged between the SUT and its environment. Testing conceptually consists of three activities: test case generation, test execution and verdict assignment (or test analysis). Depending on the test approach, these activities can be applied in parallel (the online approach) or sequentially (the offline approach). Both will be discussed in this section. Before discussing them, we introduce the test architecture and the conformance relation that we use in our approaches.

3.3.1 Test architectures

A web services orchestration is a process that reuses and integrates existing web services. It cannot run without its partners (i.e., its environment), even if we want to test only it. So we must simulate all its partners when we want to test this process. A test architecture describes how a service under test (SUT) communicates with its environment and what information is available to the tester. The test architectures for unit testing of web service orchestration proposed by Li et al [102] comprise a centralized architecture and a distributed architecture. In our work, we use a centralized test architecture composed of several tester processes and one controller process that co-ordinates these testers. Among the testers, one plays the role of the client (or service consumer), while the others simulate the partner services. In this architecture, the roles of the controller are: generating the test cases (the input events and their data), co-ordinating the testers to send the inputs to the SUT and to receive the outputs from the testers, verifying the outputs and giving


the verdict. The role of each tester is very simple: sending a message to the SUT when it receives a request from the controller and forwarding the response of the SUT to the controller. This architecture is also used to test a single web service (where there are no interactions between the SUT and partners, or we do not know them) by keeping only one tester, the one that represents the client. From here on, a SUT means a web service orchestration, and we speak of gray-box testing for both cases. Figure 3.2 shows an abstract model of the SUT with two partners and one client (top), and the corresponding test architecture (bottom).

Figure 3.2: An abstract model of SUT and its test architecture

3.3.2 Conformance relation

Conformance here defines a relation between an implementation model and its specification model. In the context of gray-box testing of a timed system, the timed observable traces are used instead of the implementation model, and the conformance relation is defined over them. For our conformance testing approach for web service orchestrations modeled by a D-TEFSM, we propose a timed conformance relation for D-TEFSM, denoted xtioco (extended timed input output conformance relation). This relation extends


ioco [136] with data variables and time invariants on states. In order to define the conformance relation, we first define a few operators. Let M = (S, s0, F, V, D^{|V|}_V, Eτ, C, Inv, T) be a D-TEFSM and SemM = (L, l0, Γ, ⇒) its semantics. For a timed sequence σ, M after σ is the set of states that are reachable in M by σ. Formally:

M after σ = {(s, u, v) ∈ L_{SemM} | (s0, u0, v0) ⇒^σ (s, u, v)}

Let l = (s, u, v) ∈ L_{SemM}; out(l) is the set of all observable output events that can occur when the system is at state s. Formally:

out((s, u, v)) = {a ∈ EO | s −a→}

The extended timed input output conformance relation, denoted xtioco, is defined as follows.

Definition 3.14. (xtioco): Let MS be a D-TEFSM that is the specification model of a system and MI be its implementation:

MI xtioco MS ⟺ ∀σ ∈ Traces(MS): out(MI after σ) ⊆ out(MS after σ) and inv(MI after σ) ⊆ inv(MS after σ)
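The output-inclusion part of this relation can be sketched as a simple check (a hypothetical, finite encoding where the possible out-sets per observable trace are given explicitly; computing these sets from the models is the hard part and is not shown here):

```python
# Sketch (hypothetical data, not a full xtioco checker): given, for each observable
# timed trace of the specification, the sets of outputs the specification and the
# implementation may produce, check the out-set inclusion required by xtioco.

def xtioco_outputs_ok(spec_out, impl_out):
    """spec_out, impl_out: dicts mapping a trace (tuple of events) to the set of
    outputs that may follow it. Returns True iff, for every specification trace,
    the implementation's out-set is included in the specification's out-set."""
    return all(impl_out.get(trace, set()) <= outs
               for trace, outs in spec_out.items())
```

This mirrors Example 3.2 below: an implementation producing !f where the specification allows only !g after the same trace violates the inclusion.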

Figure 3.3: An example of xtioco relation of D-TEFSMs

Example 3.2. Figure 3.3 shows an example of xtioco and non-xtioco between two implementations I1, I2 and their specification S. It is easy to see that I1 does not conform to S under xtioco because:

• out(I1 after {(?a, a = 6, 0) → (!c, c = 1, 4) → (?d, d = 10, 2)}) = {!f} ≠ out(S after {(?a, a = 6, 0) → (!c, c = 1, 4) → (?d, d = 10, 2)}) = {!g}

I2, on the other hand, conforms to S under xtioco because the difference is allowed; v⃗ denotes the data variables, where va ≥ 5:

• out(I2 after {(?a, v⃗, 0) → (!c, v⃗, 4) → (x = 3, v⃗, 3)}) = {!e} = out(S after {(?a, v⃗, 0) → (!c, v⃗, 4) → (x = 3, v⃗, 3)}) = {!e} and {1 < x ≤ 2} ⊂ {x ≤ 2}


3.3.3 Offline approach

Offline testing is the approach in which the test activities (test case generation, test execution and verdict assignment) are applied sequentially. Firstly, abstract test cases are generated by a tool. Next, concrete test cases are derived by generating the input data of the abstract test cases from their data types. Finally, these concrete test cases are executed by a test executor, which sends the inputs, collects the outputs and analyzes them to produce a verdict. Because the TEFSM is a timed extended model with a set of data variables and time constraints, there are several aspects that we must consider when testing: control flow, data flow and the time constraints between inputs and outputs. In this section, we discuss what a test case for a TEFSM (called a timed test case) is and how to generate timed test cases from a TEFSM model, and finally how these timed test cases are executed against the SUT to check the conformance of an implementation with its specification. Figure 3.4 summarizes our testing approach for web services.

Figure 3.4: Offline testing approach for web services


Timed test case

A test (or test case) is an experiment performed on the implementation by a tester. There are different types of tests, depending on the specification of the system under test. Here, we focus on tests that can measure precisely the delay between two observed actions, and can emit an output at any exact point in time after sending one or several inputs. An (abstract) timed test case for a TEFSM must indicate, if we send an input with some conditions on its data fields or perform a synchronous time delay, which outputs we may receive after a duration satisfying a set of time constraints. Executing a test case produces one of three verdicts: pass, fail or inconclusive. Because several outputs may be generated after sending an input, we need to indicate which one is the expected output, i.e., what our test purpose is. We call a sequence of transitions on a path from the initial state to a final state of the TEFSM a test purpose. Our abstract timed test cases are generated based on such test purposes.

Definition 3.15. (Test purpose): Let M be a TEFSM. A test purpose of M, say Tp, is a deterministic TEFSM over EI ∪ EO with a distinguished non-empty set of states. This set of states is denoted Accept(Tp), with Accept(Tp) ⊂ S(Tp) ∧ LastAction(Tp) ⊆ LastAction(M).

Figure 3.5: Test purpose example (t is a clock)

Definition 3.16. (Abstract timed test case): An abstract timed test case is a tuple TC = (S, I, O, D, Tr, s0, SI, SO, SD, SU, SP, C, Inv) where S is the set of finite states; I, O and D are disjoint sets of input actions, output actions and time delays, respectively; Tr ⊆ S × (I ∪ O ∪ D) × (P(I) ∨ Φ(C)) × 2^C × S is the transition relation; s0 ∈ S is the initial state; and the sets SI, SO, SD, SU, SP ⊆ S form a partition of S. C is a finite set of clocks. The transition relation and the sets of states fulfill the following conditions:

• SI is the set of input states, s0 ∈ SI and ∀s ∈ SI, ∃! t = (s −a→ s′) ∈ Tr such that a ∈ I. Moreover, ∄d ∈ D and ∄s′ ∈ S such that s −d→ s′ ∈ Tr.

• SO is the set of output states, and ∀s ∈ SO, ∀o ∈ O, ∃! s′ such that s −o→ s′ ∈ Tr. Moreover, ∄i ∈ I, ∄d ∈ D and ∄s′ ∈ S such that s −i→ s′ ∈ Tr or s −d→ s′ ∈ Tr.

• SD is the set of time delay states, and ∀s ∈ SD, ∃! t = (s −d→ s′) ∈ Tr such that d ∈ D and s′ ∉ SD. Moreover, ∄i ∈ I and ∄s′ ∈ S such that s −i→ s′ ∈ Tr.

• SP and SU are the sets of pass states and inconclusive states, which are terminal. Besides, ∀s ∈ SP ∪ SU, ∄a ∈ I ∪ O, ∄d ∈ D and ∄s′ ∈ S such that s −a→ s′ ∈ Tr or s −d→ s′ ∈ Tr.

• Inv: SO → Φ(C) is a mapping that assigns a time invariant to each output state.

• P(I) ∨ Φ(C) are the constraints on the data values and on the clocks of the input actions.

• X ⊆ 2^C is the set of clocks to be reset.


Example 3.3. Figure 3.6 shows two abstract timed test cases. The semantics of the one on the left is: if we send an input i1 to the SUT, then the expected output is o1. If the output is o″1, there is no conclusion (inconclusive verdict); if the output is anything else (i.e., neither o1 nor o″1), a fail verdict is produced. Next, if o1 arrives, we continue by sending another input i2, and at this moment the clock t is reset (t ↓). Finally, if we receive the output o2 within a duration less than or equal to 5, a pass verdict is assigned. If anything else is received, this test case returns a fail verdict.

Figure 3.6: Examples of abstract timed test cases
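The verdict logic of the left-hand test case of Figure 3.6 can be sketched as follows (a simplified encoding; the sut callable, which returns an (output, elapsed) pair for each input, is an assumption made for illustration):

```python
# Sketch (hypothetical structure) of the left-hand abstract timed test case of
# Figure 3.6: send i1, classify the answer, then send i2 with a clock reset and
# require o2 within 5 time units.

def run_left_test_case(sut):
    """`sut` maps an input name to an (output, elapsed_time) pair; all names
    (i1, o1, o''1, i2, o2) come from the figure, the interface is illustrative."""
    out, _ = sut("i1")
    if out == "o''1":
        return "inconclusive"
    if out != "o1":
        return "fail"
    out, elapsed = sut("i2")        # the clock t is reset here (t := 0)
    if out == "o2" and elapsed <= 5:
        return "pass"
    return "fail"
```

A SUT answering o1 then o2 after 3 time units yields pass; answering o″1 yields inconclusive; answering o2 only after 7 time units yields fail, matching the three verdicts described above.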

Definition 3.17. (Timed test case): A timed test case is an abstract timed test case whereeach input action, its data values and its time delay are fixed.

Test case generation

Many test case generation methods and tools were introduced in sections 2.3.2 and 2.3.4. Each approach focuses on a different model and uses different input conditions, depending on the test purpose, to generate different types of test cases or timed test cases. A test purpose, defined by the testers, describes a particular function of the SUT. It is effective for deriving a test case, in particular for large models or for testing in context. Using test purposes sometimes allows us to limit the state explosion problem because we only consider one part of the model. It demands that the testers indicate which cases they want to test, but it is not easy to know how many cases must be tested to conclude that an IUT conforms to its specification. In the case of web service orchestration, using the sequence of input/output actions and the constraints of a TEFSM path as a test purpose allows us to obtain all possible abstract timed test cases over the TEFSM and, after executing this set of test cases, we can affirm whether an implementation under test (IUT) conforms to its specification or not. We introduce here our method to generate the timed test cases. It is composed of three steps:

1. Building a coverage tree from the TEFSM model to obtain a set of possible paths (each path starts at the initial state and ends at a final state). Then, we pick the input/output


transitions (i.e., the visible actions) of each path, including their constraints on the data variables and on the clocks, to build a test purpose.

2. These test purposes are applied against the specification to generate the abstract timed test cases. That is, we verify the reachability of each state on the path and add the inconclusive states.

3. Finally, using the abstract timed test cases and the WSDL files, the concrete timed test cases are derived.

Step 1: generating all possible paths. We propose to use the path coverage criterion to cover all possible paths through the TEFSM, where each path starts at the initial state and ends at a final state. The paths satisfying this coverage can be constructed by a simple breadth-first or depth-first search over the TEFSM. In [91], a coverage tree is built using all-state coverage or all-transition coverage; the paths obtained in this way sometimes do not satisfy the condition that each path starts at the initial state and ends at a final state. Algorithm 1 shows a method for global path coverage by traversing the TEFSM in breadth-first order. The algorithm stops when all leaf nodes of the tree belong to the set of final states of the TEFSM. To do this, for each current state (noted s), if s has not been visited, we continue to build the tree from the descendant states of s. On the contrary, if s has been visited and s is not a final state, the tree continues to be built by duplicating the rest of the TEFSM from s (see the else branch in the algorithm). Moreover, loops can exist in a TEFSM, guarded by a defined condition, and internal variables are usually used to control these loops. To guarantee that the algorithm terminates, we must update the data values of these variables so that the condition becomes true after n steps. Finally, for each path, we only pick the input/output actions, and these actions are decorated with the constraints from the conditions on this path (the guard conditions on transitions) to obtain a test purpose.


Algorithm 1: Make the coverage tree from a TEFSM
Input: aut = (S, s0, F, V, D^{|V|}_V, Eτ, C, Inv, T), a TEFSM specification
Output: a tree

queue ← {s0}; tree ← ∅;
while queue ≠ ∅ do
    s ← queue.pop();
    tList ← getOutTransition(s)^a  // {t = (s −<e,cond,[v⃗:=x⃗; R↦0]>→ s′) ∈ T | cond ≠ false}
    foreach t = (s −<e,cond,[v⃗:=x⃗; R↦0]>→ s′) in tList do
        update V by the data update function of t;
        if s′ has not been visited then
            tree.add(t); queue.push(s′); mark s′ as visited;
        else
            // duplicate the rest of the TEFSM from s′: a new set of transitions is added to T
            create a new state nstate;
            update the target state of t to nstate;
            tree.add(t); queue.push(nstate); mark nstate as visited;
            tList′ ← getOutTransition(s′);
            foreach t′ in tList′ do
                t′′ ← copy of t′;  // create a new transition
                update the source state of t′′ to nstate;  // change the reference from s′ to nstate
                add t′′ to T;
return tree;

^a At each moment, the condition of a transition has three values: true, false, or undefined (when the condition depends on input parameters). The getOutTransition(s) function returns the set of transitions whose source state is s and whose condition evaluates to true or undefined.

Step 2: generating the abstract timed test cases. We present here our approach to generate a timed test case from a TEFSM using a test purpose. Our approach firstly computes the synchronous product between the TEFSM and the test purpose. While computing the synchronous product, we also consider the feasibility of paths (based on the constraints on data variables and on clocks) on this product, in order to cut the unsatisfiable paths. Secondly, the test case is generated by selecting a trace leading to the Accept state; for each output transition (noted t) on this trace, if there exist other output transitions that go out from the same state as t and whose constraints have a non-empty intersection with those of t, inconclusive states are assigned to the target states of these transitions.


Castanet et al [44] defined the synchronous product as follows:

Definition 3.18. (Synchronous product): Let M be a TEFSM and Tp a test purpose. The synchronous product of M and Tp is a TEFSM SP defined as follows.

• C^SP = C^M ∪ C^Tp;

• V^SP = V^M;

• E^SP ⊂ E^M;

• S^SP ⊂ S^M × S^Tp;

• S^SP, T^SP and Inv^SP are the smallest relations defined by the following rules:

– Rule R0:

s0^SP = (s0^M, s0^Tp) ∈ S^M × S^Tp ∧ Inv(s0^SP) = Inv(s0^M) ∪ Inv(s0^Tp)

– Rule R1:

(s1, s2) ∈ S^SP ∧ s1 −<e1,cond1,u1>→ s′1 ∈ T^M ∧ s2 −<e2,cond2,u2>→ s′2 ∉ T^Tp
⇒ (s′1, s2) ∈ S^SP ∧ (s1, s2) −<e1,cond1,u1>→ (s′1, s2) ∈ T^SP ∧ Inv(s′1, s2) = Inv(s′1)

– Rule R2:

(s1, s2) ∈ S^SP ∧ s1 −<e,cond1,u1>→ s′1 ∈ T^M ∧ s2 −<e,cond2,u2>→ s′2 ∈ T^Tp
⇒ (s′1, s′2) ∈ S^SP ∧ (s1, s2) −<e,cond1∪cond2,u1∪u2>→ (s′1, s′2) ∈ T^SP ∧ Inv(s′1, s′2) = Inv(s′1) ∪ Inv(s′2)

When we test a SUT, a global clock always exists, namely real time. So we can add a global clock that is never reset while we execute a test case; its initial value at the start of the test case is zero. Next, we need to map all local clocks to this global clock to compute an out reach time interval for each transition in real time. Let us first define an operator ⊕ on intervals: if I = [a, b] and J = [c, d] are intervals, then the sum of I and J, noted I ⊕ J, is defined as [a + c, b + d]. Berrada et al [25] define the out reach time interval over a global clock h as follows:

Definition 3.19. (Out reach time interval): Given a TEFSM M and a path ρ = t1, t2, ..., tn of M from the initial state, the out reach interval of a transition is the interval of the global clock h outside of which the transition must happen in order to agree with the time constraints of ρ. If a transition happens inside this interval, we know that at least one of the constraints of ρ is violated. Formally we define:

h(t0) = [0, 0]

h(ti) = ∩_{x ∈ C(ti)} (h(t_kx) ⊕ SC(ti, x))

where x is last reset in t_kx (kx < i), the complement of h(ti) is noted h̄(ti), and SC(ti, x) denotes the constraints over x.


To compute this interval for a transition t, for every clock x of t, we look for the transition t′ where x was last reset. If we find such a t′, we know that x is reset during h(t′). Then, we add¹ the time constraint of x to h(t′). Finally, the complement of the intersection of all these intervals is the wanted interval.

Figure 3.7: Out reach time intervals

Example 3.4. Let ρ be the path of figure 3.7, where c1 and c2 are local clocks. Firstly, we add a new transition t0 in which all clocks are reset: h(t0) = [0, 0].

• For t1, clocks c1 and c2 are reset in t0, hence: h(t1) = h(t0) ⊕ [0, 2] = [0, 2].

• For t2, clock c2 is reset in t1, hence: h(t2) = h(t1) ⊕ [3, 5] = [3, 7].

• For t3, clock c1 is reset in t2, hence: h(t3) = h(t2) ⊕ [0, 3] = [3, 10].

• For t4, clock c1 is reset in t3 and c2 in t1, hence:

h(t4) = (h(t3) ⊕ [0, 3]) ∩ (h(t1) ⊕ [7, 10]) = [7, 12].
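The computation of this example can be replayed with a few lines of interval arithmetic (a sketch; intervals are encoded as (lo, hi) pairs and None denotes the empty interval):

```python
# Sketch reproducing the interval computation of Example 3.4 over the global clock h.
def oplus(i, j):
    """Interval sum: [a, b] + [c, d] = [a + c, b + d]."""
    return (i[0] + j[0], i[1] + j[1])

def intersect(i, j):
    """Interval intersection; None if empty (an infeasible path)."""
    lo, hi = max(i[0], j[0]), min(i[1], j[1])
    return (lo, hi) if lo <= hi else None

h = {"t0": (0, 0)}
h["t1"] = oplus(h["t0"], (0, 2))                 # c1, c2 reset in t0
h["t2"] = oplus(h["t1"], (3, 5))                 # c2 reset in t1
h["t3"] = oplus(h["t2"], (0, 3))                 # c1 reset in t2
h["t4"] = intersect(oplus(h["t3"], (0, 3)),      # c1 reset in t3
                    oplus(h["t1"], (7, 10)))     # c2 reset in t1
```

Running this yields h(t1) = [0, 2], h(t2) = [3, 7], h(t3) = [3, 10] and h(t4) = [7, 12], as in the example; an empty intersection would signal an infeasible path.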

Since the behaviour of a TEFSM depends on constraints over both time and data variables, we now consider the constraints on data variables. As for the clocks, we compute the out reach data domain of each variable² at each transition in which it appears.

Definition 3.20. (Out reach data domain): Given a TEFSM M and a path ρ = t1, t2, ..., tn of M from the initial state, the out reach data domain of a variable v at transition ti, i ∈ [1, n], is the intersection of the out reach data domain of v at the previous transition (noted

¹ using ⊕
² We only consider the variables that depend on the input parameters


tj, j < i) in which the variable v appears with the constraints on ti. Formally we define:

Dv(ti) = Dv(tj) ∩ (∩CSv(ti))

where CSv(ti) is a constraint on variable v at transition ti and Dv(t0) is defined in the TEFSM.

Definition 3.21. (Feasible path): Let ρ = t1, t2, ..., tn be a path. ρ is called a feasible path iff ∀ti, i ∈ [1, n]:

1. h(ti) is not empty.

2. ∀v ∈ CS(ti), Dv(ti) is not empty.

Figure 3.8: Synchronous product and abstract test case selection

Example 3.5. Figure 3.8 presents the synchronous product of a TEFSM and a test purpose, where we applied the feasible path technique with variable v1 to cut the unsatisfiable path at state (4,2). We stop at state (2,2) because the input d is defined only once in the test purpose and it has already appeared on the current path. The abstract test case is the path from the initial state (0,0) to the accept state (6,3). While selecting this path, at state (1,1), there are two output


actions that may be produced after giving the input a. But the intersection between the constraints of b and c is empty, so no inconclusive state is added after the output action c.

As introduced in section §2.3.4 of chapter §2, TGSE generates timed test cases by simulating a Communicating System Under Test (CSUT) that is modeled by one test purpose and some ETIOAs [27, 26] (i.e., Extended Timed Input/Output Automata). Since no tool supports our method directly, we used TGSE as the supporting tool for our second step. However, this tool does not allow us to add the inconclusive states into the test case; it verifies only the reachability of each state of the test purpose. Before discussing how to generate a test case using the TGSE tool, we present the following definitions.

Definition 3.22. (ETIOA): An Extended Timed Input/Output Automaton (ETIOA) is a10-tuple M = (S,L,C, P, V, V0, P red,Ass, s0,→) where:

• S is a finite set of states.

• s0 is the initial state.

• L is a finite alphabet of actions.

• C is a finite set of clocks.

• P is a finite set of parameters.

• V is a finite set of variables.

• V0 is a finite set of the initial values for variables of V.

• Pred = Φ(C, P, V) ∪ P[P, V], where P[P, V] is a set of linear inequalities over V and P.

• Ass = {x := 0 | x ∈ C} ∪ {v := f(P, V) | v ∈ V} is a set of updates on clocks and variables.

• → ⊆ S × L× Pred×Ass× S is a set of transitions.

Clocks and constraints of an ETIOA: for a set C of clocks, a set P of parameters, and a set V of variables, the set of clock constraints Φ(C, P, V) is defined by the grammar:

Φ := Φ1 | Φ2 | Φ1 ∧ Φ2,  Φ1 := x ≤ f(P, V),  Φ2 := f(P, V) ≤ x

where x is a clock of C, and f(P, V ) is a linear expression of P and V .

Definition 3.23. (CSUT): A Communicating System Under Test (CSUT) is a 5-tuple CS = (SP, SV, R, M_i (1 ≤ i ≤ n), TP) where:

• SP is a finite set of shared parameters;

• SV is a finite set of shared variables;

• M_i, 1 ≤ i ≤ n, are ETIOAs;

• TP is an ETIOA representing the test purpose;


• R is a finite set of synchronization rules, where each rule r⃗ is a vector of n + 1 elements.

A CSUT declares a set of shared resources (parameters and variables), a set of ETIOAs, a set of rules describing the different possible synchronizations between the entities, and a test purpose modeled by an ETIOA. Figure 3.9 shows an example of a CSUT.

Figure 3.9: A CSUT model

In order to generate test cases for web services using the TGSE tool, firstly the test purpose must be defined and modeled by an ETIOA (the ETIOA of the test purpose does not include a finite set of parameters). Secondly, the TEFSM specification is transformed into an ETIOA by deleting the set of time invariants; for each variable that is updated by the data value of an input event and is not shared with the variables of the test purpose, a corresponding parameter is declared. Thirdly, we build a CSUT from the ETIOA specification of the web service under test and the ETIOA representing the test purpose by declaring a set of synchronization rules, shared variables and shared parameters. Finally, using TGSE to simulate this CSUT, a test case in XML format satisfying the test purpose is returned if one is found; otherwise, nothing is returned. The values of the parameters are randomly generated and saved in the OutputLp.out file if they are declared. These values are used as the conditions of input events when we generate their data values in the next step. A case study is shown in the appendix (see §A.3) as a tutorial for this tool.

Step 3: deriving the concrete test cases. After the abstract timed test cases are generated, data and time delays are generated for each input message to build the concrete test cases. Test data generation is based on an analysis of the data type (XML Schema) of each input message defined in the WSDL file of the web services. The XML Schema syntax defines two kinds of types: simple types and complex types. A complex data type defines a composition of simple and/or complex data types, with "sequence", "choice" and "all" relationships among member data types. Different types can be combined into aggregated data types. Here, we discuss how


to generate the data of the input messages using their data types and their sets of constraints. For each simple type, we have a set of default configurations; for example, a type "int" may be declared as belonging to [min, max], or its default values may belong to an enumeration {1, 3, 5, 8, 10}. The basic algorithm to generate the data is as follows:

1. Simple type: randomly generate a value corresponding to its data type definition, using either the default configuration or the constraints of the abstract test case.

2. Complex type: the generator recursively analyzes the structure of the data type until it reaches simple types:

(a) Analyze the hierarchical tree structure of the complex data type.
(b) Traverse the tree, and at each tree node:

• If it is a simple data type, perform the simple data type analysis.
• If it is a "sequence" structure node, generate a set of test cases with each test case corresponding to an ordered combination of the child nodes.
• If it is a "choice" structure node, generate a set of test cases with each test case corresponding to one child node.
• If it is an "all" structure node, generate a set of test cases with each test case corresponding to a random combination of the child nodes.

The time delay for an input message is randomly generated from a region of the constraint on the clock. If there are no time constraints, the message is sent immediately.
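The recursive generator described above can be illustrated in Python (a simplified, hypothetical schema encoding rather than real XML Schema parsing; for "sequence" and "all" we generate one combined record per test case instead of enumerating orderings):

```python
# Sketch (assumed schema encoding, not actual XML Schema parsing) of the recursive
# test data generator: simple types draw from a default configuration or from the
# constraints of the abstract test case; "sequence"/"all" combine the children
# into one record per test case, "choice" yields one test case per alternative.
import random

def generate(schema):
    kind = schema["kind"]
    if kind == "int":
        lo, hi = schema.get("range", (0, 100))   # default configuration [min, max]
        return [random.randint(lo, hi)]
    if kind == "enum":
        return [random.choice(schema["values"])]
    if kind in ("sequence", "all"):
        # one value per child, combined into a single record (simplification:
        # child orderings/combinations are not enumerated here)
        records = [dict()]
        for name, child in schema["fields"]:
            records = [dict(r, **{name: c}) for r in records for c in generate(child)]
        return records
    if kind == "choice":
        # one candidate record per alternative child
        return [{name: c} for name, child in schema["fields"] for c in generate(child)]
    raise ValueError(kind)
```

For instance, a "sequence" of an enumerated cusName and a bounded integer yields one record with both fields filled, in the spirit of the reserveRequest message of Example 3.6 below (the schema dictionaries here are illustrative, not the thesis's format).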

Figure 3.10: Data generation example

Example 3.6. Figure 3.10 shows an example of automatic data generation for the reserveRequest message, whose data type is shown on the left side. The cusName field is picked from an enumeration and the other fields are random. This message is generated using the SOAP document binding type.


Test execution

We present in this section an algorithm to execute a timed test case for web services using the centralized test architecture (section 3.3.1). This algorithm is executed by the controller and uses xtioco to produce the verdict. The send_to_SUT function determines to which tester an event belongs and sends it to the corresponding tester. When a tester receives an output from the SUT, it puts it into a global queue. The receive function in this algorithm either returns the first message of the global queue or raises a timeout if this queue is still empty after a given duration. A supporting tool, named BPELUnit, that can be used to execute timed test cases is introduced in section A.4; however, this tool does not allow us to synchronize between partner services.

Algorithm 2: Test execution
Require: max, the synchronous request timeout
Input: TC, a timed test case
Output: pass, fail or inconclusive

state ← s0^TC;
while state ∉ SP^TC do
    switch
        case state ∈ SI^TC:
            tran ← {t = (s → s′) ∈ Tr^TC | state = s};
            // delay a duration before sending to the SUT if a time delay is indicated (d > 0)
            delay d = time(tran) units of time;
            send_to_SUT(tran.action());
        case state ∈ SD^TC:
            tran ← {t = (s → s′) ∈ Tr^TC | state = s};
            delay d = time(tran) units of time;
        case state ∈ SO^TC:
            try
                d ← (invariant(state) = undefined) ? max : invariant(state);
                oMsg ← receive(d);
                tran ← {t = (s → s′) ∈ Tr^TC | state = s ∧ oMsg = t.action()};
                if tran = ∅ then
                    return fail;
                else if tran.target() ∈ SU^TC then
                    return inconclusive;
            catch timeout after d units of time
                return fail;
    state ← tran.target();
return pass;


Example 3.7. Figure 3.11 shows an example of a test execution scenario for the xLoan service. The test architecture has one controller and three testers (a client and two partners). Consider a timed test case t = {?request → !approveReq → ?approveRes → !response → delay = 60s → !cancelReq}, where ?request and !response are the request/response messages of the client, !approveReq and ?approveRes are the request/response between the SUT and the bank service, and !cancelReq is a request message from the SUT to the bank service. The test execution scenario of this test case runs as follows. Firstly, the controller sends the input ?request to Tester1 (client), then it receives the request !approveReq from Tester2 (bank service) after 2 units of time. The controller continues by sending the response ?approveRes to Tester2 and receives the response !response from Tester1 (client) after 3 units of time. Afterwards, it waits 60 seconds for a request from Tester2 (bank service). When the request !cancelReq arrives, the verdict pass is produced for this test case and the test process finishes. In this test case, Tester3 (assessment service) is not used.

Figure 3.11: An example of test execution scenario
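The timed test case of this example can be written down as a simple data structure; the tuple encoding below is ours, purely for illustration:

```python
# One step per action: ("?", name) is sent by the controller, ("!", name) is
# expected from a tester, ("delay", s) is an explicit wait. This mirrors
# t = {?request -> !approveReq -> ?approveRes -> !response -> delay=60s -> !cancelReq}.
timed_test_case = [
    ("?", "request"),      # controller -> Tester1 (client)
    ("!", "approveReq"),   # expected from Tester2 (bank service)
    ("?", "approveRes"),   # controller -> Tester2
    ("!", "response"),     # expected from Tester1 (client)
    ("delay", 60),         # wait before the last observation
    ("!", "cancelReq"),    # expected from Tester2; on arrival the verdict is pass
]

def total_delay(tc):
    """Sum of the explicit delays the controller must schedule for a test case."""
    return sum(step[1] for step in tc if step[0] == "delay")
```

Such a sequence is what the controller replays step by step against the testers.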

3.3.4 Online approach

The problem with the approach introduced in section 3.3.3 is state explosion: we must traverse all states of the TEFSM while generating the set of test cases. Online testing is an approach in which test case generation and test execution run in parallel. It avoids the problem of the offline approach by randomly selecting the test case, but it does not guarantee that all abstract test cases are covered. From an initial state, only a single test primitive (input event) is generated from the model at a time, and it is immediately executed on the system under test (SUT). The output produced by the SUT, as well as its time of occurrence, is then checked against the specification. Next, a new test primitive is produced based on the values of previous events, or by random selection if there are several acceptable options, and so forth until a final state is reached. Several tools were developed following this approach, such as Jambition, T-Uppaal and TorX.
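The on-the-fly loop can be sketched over a toy transition table; the state names and the `send_to_sut`/`receive_from_sut` callbacks below are our simplifications (the real model is a TEFSM with data and clocks):

```python
import random

# Toy model: state -> list of (action, kind, next_state); kind "input" is
# controlled by the tester, kind "output" is controlled by the SUT.
MODEL = {
    "s0": [("?request", "input", "s1")],
    "s1": [("!response", "output", "s2")],
    "s2": [],  # final state
}

def online_walk(send_to_sut, receive_from_sut, seed=0):
    """Generate and execute one test primitive at a time until a final state."""
    rng = random.Random(seed)
    state, trace = "s0", []
    while MODEL[state]:
        inputs = [t for t in MODEL[state] if t[1] == "input"]
        if inputs:                      # tester randomly chooses an input
            action, _, nxt = rng.choice(inputs)
            send_to_sut(action)
        else:                           # wait for the SUT and check its output
            observed = receive_from_sut()
            match = [t for t in MODEL[state] if t[0] == observed]
            if not match:
                return "fail", trace    # unexpected output
            action, _, nxt = match[0]
        trace.append(action)
        state = nxt
    return "pass", trace
```

A conforming SUT stub, e.g. `online_walk(lambda a: None, lambda: "!response")`, yields the verdict pass with the trace [?request, !response].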


3.3. Unit testing 59

Online testing algorithm

In this section, we propose an online testing algorithm for web services based on the TEFSM model. This algorithm is executed by the controller of the test architecture (Fig. 3.2) to generate a timed test case, execute it, and assign the verdict using the conformance relation xtioco (section 3.3.2). The main idea of the algorithm is the following. From the current state (at the start of the algorithm, the current state is the initial state), a list of next actions is computed based on the current values of the variables. If this list is empty, we have arrived at a final state: a trace has been found and it contains no fault; we reset the current state to the initial state and continue with another trace. Otherwise, the list is composed of input actions (noted I), delay actions (noted T) and/or output actions (noted O). If the set of input and delay actions is not empty (I ∪ T ≠ ∅), we randomly choose an action t ∈ I ∪ T from this list3. If t ∈ I, we randomly generate the data for this action using the algorithm of section 3.3.3, send it to the SUT, update the data variables and add the action to the current trace. Otherwise (t ∈ T), we delay in order to synchronize the time with the SUT, and add this time delay to the current trace. In the case I ∪ T = ∅ (which implies O ≠ ∅), we wait for a bounded duration to receive the outputs from the SUT and check them against the specification. If no output arrives within this duration, a timeout fault is assigned to the current trace. Finally, after each action (input, output or delay), the current state is updated to the target state of the transition corresponding to the action. The algorithm finishes when it has generated the requested number of traces (an input parameter) or finds a fault. Figure 3.12 illustrates this approach.

3In our model, we consider a timestamps transition as an action that is controlled by the tester, so at any current state, (I ∪ T) ∩ O = ∅.


Figure 3.12: Online testing approach illustration


Algorithm 3: Online testing algorithm

Require: max: the synchronous request timeout
Input:   aut: a TEFSM specification, nb: the number of traces to generate
Output:  a list of traces and the corresponding verdicts

state ← s0
counter ← 0
traceList ← ∅                        // list of traces
trace ← ∅                            // the current trace
queue ← ∅                            // queue storing the outcoming messages of the SUT
while counter < nb do
    nActions ← getNextAction(state)  // see Algorithm 4
    if nActions = ∅ then             // a final state has been reached
        if trace ≠ ∅ then
            traceList.add(trace)     // the pass verdict is assigned to the current trace
            trace ← ∅
        state ← s0
        counter++
    else
        inputList ← getInputTrans(nActions)
        timeList ← getElapseTrans(nActions)
        if (itList ← inputList ∪ timeList) ≠ ∅ then
            randomly choose t ∈ itList
            if t ∈ inputList then
                iMsg ← generate_data(t.action)
                send_to_SUT(iMsg)
                update_variable(iMsg)
                trace.add(iMsg)
            else
                delay d ← time(t) units of time
                trace.add(delay = d)
            state ← t.target         // move to the new state
        else
            d ← (inv(state) = undefined) ? max : inv(state)
            sleep for d time units, or wake up at d' ≤ d if queue becomes non-empty
            if (oMsg ← queue.pop()) is not null then
                if verify(oMsg) = true then
                    update_variable(oMsg)
                    trace.add(oMsg, d')   // add the output and its time delay into the trace
                    state ← target of the transition matching oMsg   // move to the new state
                else
                    exit()                // unexpected output received (fail)
            else
                exit()                    // time constraint not satisfied (fail)
return traceList


Algorithm 4: Get outgoing transitions from the current state

function getNextAction(state)
begin
    result ← ∅                       // transitions list
    queue ← {t = (s −<e, cond, [~v := ~x; R ↦ 0]>→ s') ∈ T | s = state ∧ cond = true}
    while queue ≠ ∅ do
        trans ← queue.pop()
        if trans is an input/output transition or a timestamps transition then
            result.add(trans)
        else
            update the variables by the data update function of trans
            temp ← {t = (s −<e, cond, [~v := ~x; R ↦ 0]>→ s') ∈ T | s = trans.target ∧ cond = true}
            queue.push(temp)
    return result

3.4 Integrated testing

A composite of web services is a system integrated at runtime, and its results depend on its partners and on the conditions of the real environment. Unit testing of each service is not enough to assess the quality of a composite, while existing works on integrated testing are either few or focused on WSDL specifications. Integrated testing can be performed to verify, in the real environment, the timing constraints on the messages transmitted between the service under test and its partners, whether a partner service is ready to use, etc. In general, we can use passive testing by verifying an execution trace of a web service orchestration. However, passive testing is a method in which the tester does not interact directly with the service under test (SUT). This can make trace collection very long, because we do not know when a session starts and when it finishes. To address this problem, we can interact directly with the SUT by sending input requests that start a new session (active part), and then collect the trace to analyze it (passive part).
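This active-then-passive combination can be sketched as follows; the `startSessionReq` message name and the probe modelled as a thread-safe queue are placeholders, not the actual tool interfaces:

```python
import queue
import time

def run_session(send_request, probe_queue, session_timeout=2.0):
    """Actively start a session, then passively collect the observed messages."""
    send_request("startSessionReq")           # active part: enable a new session
    trace = []
    deadline = time.monotonic() + session_timeout
    while True:                               # passive part: observe the probe
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break
        try:
            msg, at = probe_queue.get(timeout=remaining)
        except queue.Empty:
            break                             # no more messages within the window
        trace.append((msg, at))               # message plus its occurrence time
    return trace
```

The returned trace (message, time) pairs are then handed to a checking engine.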

3.4.1 Methodology

Figure 3.13 shows an overview of our proposed method. It is composed of three steps. First of all, the orchestration specification (for example UML, BPMN, BPEL, etc.) must be translated into a formal model, i.e., a TEFSM. This model is used to check the correctness of the trace generated in step 2. In step 2, a SOAP request is automatically generated and sent to the SUT to enable a new session. The content of this message is randomly generated based on the XML schema of the message type (see the derivation of concrete test cases in section 3.3.3 for more detail). Afterwards, all messages exchanged between the SUT and its partners (including the tester) are immediately collected by an installed probe to build an execution trace. In figure 3.13, the notions of Point of Control and Observation (PCO) and Point of Observation (PO) denote such probes. The occurrence time of each message is also saved. Finally, this trace is verified by a checking engine to produce the verdict (step 3).


Figure 3.13: An overview of integrated testing approach

3.4.2 Checking algorithm

This section presents an algorithm to check the correctness of an execution trace against its specification (i.e., a TEFSM). The algorithm checks message by message and produces a verdict for each message. Each session has a corresponding specification; when several sessions are executed in parallel, we must check against several specification instances. In each specification, the behaviour depends on the data variables. To check that a message conforms to its specification, we need the current state, the occurrence time of the last message, and the list of variables with their current values. When a new session starts, the current state is the initial state of the TEFSM, the occurrence time of the last message is undefined, and the data variables take their initial values. We call the trio of current state, occurrence time of the last message and list of variables a ConfigState C = (sc, tc, Vc). To verify multiple sessions, we maintain a global set of configurations SC = {C1, C2, ..., Cn} where each Ci corresponds to a session. Its values are updated whenever a message arrives. Algorithm 5 receives a single message with its occurrence time, verifies it and produces a true/false verdict. This allows us to run the checking engine in parallel with the trace collection engine, because the algorithm checks the correctness of each message without needing the correlation between messages at any moment.
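The ConfigState trio and the global configuration set can be sketched like this; the field names are ours, chosen to mirror C = (sc, tc, Vc):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ConfigState:
    """One running session: current state, time of the last message, variables."""
    state: str                                      # current state s_c
    last_time: Optional[float]                      # occurrence time t_c of the last message
    variables: dict = field(default_factory=dict)   # current values V_c

def new_session(initial_state, initial_vars):
    """A fresh session: initial state, undefined last-message time, initial values."""
    return ConfigState(initial_state, None, dict(initial_vars))

# One ConfigState per parallel session; each is updated on every incoming message.
sessions = [new_session("s0", {"amount": 0})]
```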


Before introducing the details of the algorithm, we present the functions it uses:

• getNextEvent(ConfigState C = (sc, tc, Vc)): this function returns either the set of next input/output transitions of the current state sc, based on the current values of Vc, or null if a final state has been reached. An input/output transition is a transition whose event belongs to EI ∪ EO. While querying the next input/output transitions, the internal variables may be updated by the data update functions of the traversed transitions.

• getMessageName(String msgContent): this function returns the message name from its content, by mapping the structure of the message against the XML schemas defined in the WSDLs.

• update_variable(Vector V, String msgName, String msgContent): this function finds the variable whose type is msgName in the list of variables and updates its value with msgContent.
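A rough sketch of the last two helpers; reading the XML root tag instead of matching the full WSDL schemas is our simplification:

```python
import re

def get_message_name(msg_content: str) -> str:
    """Map a message body to its name. Here we just read the XML root tag,
    whereas the real function maps the structure against the WSDL schemas."""
    m = re.match(r"\s*<\s*([\w:]+)", msg_content)
    return m.group(1) if m else ""

def update_variable(variables: dict, msg_name: str, msg_content: str) -> None:
    """Find the variable whose type is msg_name and update its value.
    variables maps a variable name to a (type, value) pair."""
    for name, (var_type, _value) in variables.items():
        if var_type == msg_name:
            variables[name] = (var_type, msg_content)
            return
```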


Algorithm 5: Checking algorithm from TEFSM

Require: CS: the global list of ConfigStates
         aut: a TEFSM specification
Input:   message msg, arrival time at
Output:  true/false

msgName ← getMessageName(msg)        // get the message name from its content
foreach C = (s, t, V) in CS do
    transList ← getNextEvent(C)      // next input/output actions from the current state
    if transList = null then         // a final state has been reached
        remove C from CS             // the session finishes
    else
        foreach trans in transList do
            if trans.eventName() = msgName ∧ at − t satisfies the guard condition on the clock of trans then
                updateVariable(V, msgName, msg)
                update the current state of C by trans.getTarget()
                update the occurrence time of the last message by at
                return true
// check whether msg starts a new session
C ← (aut.getInitialState(), undefined, aut.getVariables())
transList ← getNextEvent(C)
foreach trans in transList do
    if trans.eventName() = msgName then   // a new session is enabled
        updateVariable(V, msgName, msg)
        update the current state of C by trans.getTarget()
        update the occurrence time of the last message by at
        add C into CS
        return true
return false


3.5 Conclusion

We have presented in this chapter several testing approaches for web service orchestrations. We have proposed a unit testing framework composed of a centralized test architecture, a conformance relation and two testing methods: offline and online. Our testing approach is composed of five steps: (i) modelling the web service orchestration as a TEFSM, (ii) generating a test purpose, (iii) generating the abstract timed test case, (iv) deriving the concrete timed test case, (v) executing the timed test case against the implementation under test to produce the verdict. In the offline method these activities are applied sequentially, while in the online method they run in parallel.

• For the offline approach, we presented a method to generate the abstract timed test case by computing the synchronous product (SP) between a TEFSM and a test purpose. While computing the SP, the feasible-path technique is applied to prune the unsatisfiable paths of the SP. Finally, the abstract timed test case is selected by tracing a path to the accept state and decorating it with the constraints along the path.

• For the online approach, an input is randomly selected from the current state, and its data are randomly generated and sent to the SUT. Afterwards, we wait for the output and verify its correctness using the input data and the sending time of the input. If the output is correct, we continue with other inputs until a final state is reached; otherwise we stop and a fail verdict is produced.

• An integrated testing approach is also proposed to test an orchestration with its real partners. In this approach, the data of the client request are randomly generated and sent to the SUT to enable a new session. Then all input/output messages of the SUT are collected to build an execution trace. Finally, an algorithm is proposed to check this trace.

In the next chapter, we will propose a new approach to check the correctness of a sequence of messages. It is composed of a rule syntax, a trace collection architecture and a checking algorithm that checks message by message without storing the messages.


Chapter 4

Runtime Verification

Contents

4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.2 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
    4.2.1 Test architecture . . . . . . . . . . . . . . . . . . . . . 69
    4.2.2 The rules . . . . . . . . . . . . . . . . . . . . . . . . . 70
    4.2.3 Checking algorithm . . . . . . . . . . . . . . . . . . . . 71
4.3 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

This chapter presents a new methodology to perform passive testing of the behavioural conformance of a system based on a set of constraints on input/output messages (noted rules). First, we define the passive testing architectures for web services. Second, the Nomad language is extended to define the rules. Finally, an algorithm is proposed to verify the correctness of the service under test using the execution trace. The proposed algorithm can be used either to check a collected trace (offline checking) or for runtime verification (online checking), with timing constraints covering both future and past time. The problem of data correlation between messages is also considered. The work of this chapter has been published in ICWS 2010 [43].

4.1 Introduction

The activity of conformance testing is focused on verifying the conformity of a given

implementation to its specification. In most cases, testing relies on a tester that interacts directly with the implementation under test and checks the correctness of the answers provided by the implementation (called active testing). Several active testing methods for web services were proposed in chapter 3. However, in many cases we cannot apply this method to test a running system. For example, if we use the active method to test the function create_new_account of a bank service, it will corrupt the database of the service. With a composite web service, we can only use active testing for unit testing, by simulating its partners, to guarantee that testing does not affect the real partners. But a composite web service is a system integrated at runtime, and its result depends on its partners and on the real environment. In this case, the passive testing method is


used to verify the results of partner services or the interval between a request message and a response message. Passive testing is a method that collects the observable traces of the system by installing a probe, and analyzes them to produce a verdict. This method does not affect the running system.

There are two approaches to passive testing: online and offline. The online approach checks the execution trace immediately, whenever an input/output event occurs. Its advantages are that faults may be found immediately, and that we can stop the system to avoid damage. On the contrary, the offline approach checks an execution trace after it has been collected over a period of time, which means that an error is not found immediately when it occurs. In return, this approach does not require additional resources such as CPU, RAM or another computer to run the trace collection engine and the checking engine in parallel. Depending on the concrete case, we can apply the online or the offline approach to verify the conformance of the system.

A rule expresses constraints on the order of messages and/or on the data of these messages. In natural language, a rule reads as follows: if a message M1 occurs (possibly with constraints on its data), then a message M2 (or a sequence of messages SM2) must occur before/after M1 within a given period of time. A generic temporal logic (like LTL) is usually used to define constraints on the order of messages for a model checking engine. Modelling constraints requires specifying permissions and prohibitions. However, this is generally not sufficient: properties such as availability must also be expressed, and obligations must be considered as well. In contrast to permissions and prohibitions, obligations are often associated with deadlines, in order to specify bounded-time availability requirements. In this case, a violation only occurs if the obliged action is not performed before the deadline. Such timing constraints are not covered by LTL. Cuppens et al. [51] have defined a formal security model with deadlines, called Nomad, that helps us solve this problem.

In many cases, a rule does not apply to all messages of a trace. We only pick the messages of a trace that satisfy some fixed conditions (for example, msg.username = 'xyz') or that are correlated with each other through their data values, and the rule is then applied on this set of messages. For example, suppose many sessions are executed in parallel and each message has a sessionId field. We then need to group the messages belonging to a session using the sessionId field of the messages, before applying our rule to check their correctness. This is called data correlation.
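Grouping a mixed trace by sessionId before applying a rule can be sketched as follows (the dict-shaped messages are purely for illustration):

```python
from collections import defaultdict

def group_by_session(trace):
    """Split an interleaved trace into per-session sub-traces via sessionId."""
    groups = defaultdict(list)
    for msg in trace:
        groups[msg["sessionId"]].append(msg["name"])
    return dict(groups)

# Two interleaved sessions "a" and "b":
trace = [
    {"name": "loginRes", "sessionId": "a"},
    {"name": "loginRes", "sessionId": "b"},
    {"name": "createAccountReq", "sessionId": "a"},
]
```

Each resulting sub-trace is then checked against the rule separately.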

In our work, we have proposed an extension of the Nomad language that defines constraints on each atomic action (fixed conditions) and a set of data correlations between actions, which is more convenient than LTL (for our objective) to define rules for web services. We chose this language because it provides a way to describe permissions and prohibitions that are granted immediately, as well as obligations (needing a time duration to complete) related to non-atomic actions, within contexts that carry time constraints. Moreover, its syntax is quite close to our natural-language formulation. We only support permissions and prohibitions, adding a timing constraint bounded by a duration with a minimum and a maximum time. We also consider the problem of data correlation between the messages of a rule.


4.2 Methodology

The passive testing (or runtime verification) method is composed of three steps:

1. Define the passive testing architecture to collect the execution traces of a running system.

2. Define the rules that are applied to verify the correctness of the execution traces.

3. Analyze (online or offline) the execution traces to produce the verdict.

This verdict is pass if the system trace respects the specified constraints, and fail otherwise. In previous works, an inconclusive verdict was possible when the tester could not extract the necessary information from the execution trace because it was too short. In our work, this verdict is not needed, because we return a pass or fail verdict at each message. The methodology presented in this section has been fully implemented in the tool RV4WS; a detailed description of this tool is given in section 5.3 of chapter 5.

4.2.1 Test architecture

In this section, we introduce two trace collection architectures based on the notion of Point of Observation (PO) of the ISO 9646 standard [3]: one for a basic web service and another for a web service composition. Our trace collection architectures are shown in figure 4.1. For a basic web service, the testing approach is black-box: we can only collect the SOAP messages exchanged between the client and the service under test, by setting a PO between them. For a web service composition, we can also test the communication between the web service and its partners. Thus, to collect all input/output messages of a web service composition, a PO is set on each connection between the service and a partner. All messages collected by the POs, together with their occurrence times, are sent to a checking engine that analyzes them and produces the verdict.

Figure 4.1: Trace collection architecture for web services


4.2.2 The rules

In our work, we consider each message as an atomic action. We use one or several messages to define a formula; when defining a formula, constraints on the values of message parameters may be considered. Finally, from these formulas, a rule is defined in two parts: supposition (or condition) and context. A set of data correlations can optionally be included.

Definition 4.1. (Atomic action): We define an atomic action as one of the following actions: an input message or an output message. Formally:

AA := Event(Const)

where:

• Event represents an input/output message name;

• Const := P ≈ V | Const ∧ Const | Const ∨ Const where:

  – P are the parameters. These parameters represent the relevant fields in the message.
  – V are the possible parameter values.
  – ≈ ∈ {=, ≠, <, >, ≤, ≥}.

Definition 4.2. (Formula): A formula is defined recursively as follows:

F := start(A) | done(A) | ¬F | F ∧ F | F ∨ F | O^{d∈[m,n]} F

where

• A is the atomic action.

• start(A): A is being started.

• done(A): A has been finished.

• O^{d∈[m,n]} F: F was true d units of time ago if m > n; F will be true within d units of time if m < n, where m and n are two natural numbers.

Definition 4.3. (Rule): If α and β are formulas, then R(α|β) is a rule, where R ∈ {P: permission; F: forbidden}. The constraint P(α|β) (resp. F(α|β)) means that it is permitted (resp. prohibited) to have α true when context β holds.

Example 4.1. Examples of rule definitions:

1. We only allow creating a new account on the service if we have successfully logged in at most one day ago and have not logged out.

P(start(createAccountReq) | O^{d∈[1,0]D} done(loginRes) ∧ ¬done(logoutReq))


2. In the case of a web service composition, we can define a rule to verify the interval (e.g., 10 seconds) between a request message and the corresponding response message from a partner service, for instance to assess the success rate when this partner is installed on a remote host.

P(start(msgReq) | O^{d∈[0,10]S} done(msgRes))
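These rules can be represented with a small abstract syntax tree; the class names below are ours, and the temporal operator keeps only its bounds (past if m > n, future if m < n). Rule 1 above, for instance, becomes:

```python
from dataclasses import dataclass

# Minimal AST for Definitions 4.1-4.3 (the naming is ours, not from the tool).
@dataclass
class Start:            # start(A)
    action: str

@dataclass
class Done:             # done(A)
    action: str

@dataclass
class Not:              # ¬F
    f: object

@dataclass
class And:              # F ∧ F
    left: object
    right: object

@dataclass
class O:                # O^{d∈[m,n]} F: past-time if m > n, future-time if m < n
    m: int
    n: int
    f: object

@dataclass
class Rule:             # R(alpha | beta), with kind in {"P", "F"}
    kind: str
    supposition: object
    context: object

# Rule 1 of Example 4.1 written with this AST:
rule1 = Rule("P", Start("createAccountReq"),
             And(O(1, 0, Done("loginRes")), Not(Done("logoutReq"))))
```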

In some cases, we need to check the correctness of the messages of a sequence by groups. For example, when many sessions are executed in parallel, data correlations between messages are necessary to identify, through their data values, the messages belonging to the same session. In this case, a rule is extended with a set of data correlations to handle this problem.

Definition 4.4. (Data correlation): A data correlation is a set of parameters that have the same data type, where each parameter represents a relevant field in a different message, and the operator = (equality) is used to compare a parameter with the others. A data correlation is considered as a property on data.

Example 4.2. Let A(p^A_0, p^A_1), B(p^B_0, p^B_1, p^B_2) and C(p^C_0) be messages, where the p_i are parameters and p^A_0, p^B_0, p^C_0 have the same type. A data correlation set defined on A, B and C is: {p^A_0, p^B_0, p^C_0} ⇔ p^A_0 = p^B_0 = p^C_0.
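A data correlation can be checked by remembering the first value seen for the correlated field and comparing later ones against it; below is a sketch over dict-shaped messages (the shape is ours):

```python
def check_correlation(messages, correlated_fields):
    """correlated_fields maps a message name to its correlated field name;
    all the designated field values must be equal across the given messages."""
    seen = None
    for msg in messages:
        field = correlated_fields.get(msg["name"])
        if field is None:
            continue                    # message not in the correlation set
        value = msg[field]
        if seen is None:
            seen = value                # first value fixes the property
        elif value != seen:
            return False                # correlation violated
    return True
```

For instance, A, B and C messages sharing p0 = 7 satisfy the correlation of Example 4.2, while differing values do not.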

Definition 4.5. (Rule with data correlation): Let α and β be formulas, and let CS be a set of data correlations over α and β (CS is defined on the messages of α and β). A rule with data correlation is defined as R(α|β)/CS, where R ∈ {P: permission; F: forbidden}. The constraint P(α|β) (resp. F(α|β)) means that it is permitted (resp. prohibited) to have α true when context β holds, under the conditions of CS.

Example 4.3. We can re-define the rules in example 4.1 with the data correlation as follows:

1. We use sessionId to identify the messages belonging to a session.

P(start(createAccountReq) | O^{d∈[1,0]D} done(loginRes) ∧ ¬done(logoutReq)) / {{createAccountReq.sessionId, loginRes.sessionId, logoutReq.sessionId}}

2. We validate that the sent/received messages are on the same machine (msgReq.sourceIp = msgRes.destIp).

P(start(msgReq) | O^{d∈[0,10]S} done(msgRes)) / {{msgReq.sourceIp, msgRes.destIp}}

4.2.3 Checking algorithm

In this section, we briefly outline the computation mechanism used to determine whether a rule holds for a given input/output sequence of events. Our algorithm checks, message by message, the conformity with each rule, without storing the message sequence. We use two global variables: currlist, the list of currently enabled rules, and rulelist, the list of rules defined to verify the system. Before introducing the details of the algorithm, we present some functions that operate on the context of each rule:


• update: this function updates the value of the context whenever a message arrives and this message occurs in the context. For example, suppose the context of a rule is loginResponse ∧ ¬logoutRequest. When the loginResponse message arrives, this context is updated to true ∧ ¬logoutRequest.

• evaluate: this function evaluates whether the context of a rule holds (true) or not. It returns one of three values: true, false, or undefined if the context contains a message that has not yet been updated. During evaluation, a negated message is provisionally assumed true. For example, at evaluation time, the expression true ∧ ¬logoutRequest is evaluated as true ∧ true = true.

• correlation: this function returns one of three values: undefined, true or false. It returns undefined when the message msg is not defined in the set of data correlations of the rule. When msg is defined in the set of data correlations of the rule, the function queries the corresponding value and compares it with the values of the previous messages to return true/false.

• contain: this function finds a message in the context of a rule. It returns true if the message msg is found in the context of the rule and its condition is validated. For example, suppose the context of a rule is the expression msgA[msgA.id = 5] ∧ msgB. When msgA arrives with id = 4, the contain function returns false, because the message name is found but its condition is not satisfied. When msgB arrives, the function returns true.
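The three-valued evaluation can be sketched over a tiny context representation, where each atom is True, False, or None (not yet updated), and a negated atom that is still None is provisionally treated as true; the encoding is ours:

```python
def evaluate(context):
    """context: list of (negated, value) pairs, combined conjunctively.
    value is True/False once updated, None while still unknown."""
    result = True
    for negated, value in context:
        if value is None:
            if negated:
                term = True       # ¬msg not yet seen: provisionally true
            else:
                return None       # a positive atom is still unknown: undefined
        else:
            term = (not value) if negated else value
        result = result and term
    return result
```

For instance, `[(False, True), (True, None)]` models true ∧ ¬logoutRequest and evaluates to True, while `[(False, None)]` is still undefined.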

As said earlier, there are two types of rules: future time and past time. To make this clearer, we analyze the checking algorithm for each type.

Rule with future time

We know that each rule has two parts: the supposition part and the context part. A rule is validated if its supposition has been enabled and its context holds (true). In a rule with future time, the context part happens after the supposition has been enabled. Our algorithm has two steps:

• Step 1) Each time a message (called msg) arrives, we have a list of current rules (currlist) that have been enabled and are waiting for the validation of their context. We first update the context of each current rule (noted rule) in currlist if msg appears in the context of rule and the data correlation of msg is satisfied, when one is defined. We then evaluate the context of each rule. If the context is true and the time constraint is satisfied, a verdict pass/fail, depending on whether the rule is a permission or a prohibition, is given at the time msg arrives, and the rule is removed from the current list (currlist). If the context cannot yet be evaluated, we wait for the next message to complete the context; in this case, a pass verdict is given.

• Step 2) We examine all rules in rulelist and enable a rule (add it into currlist) if its supposition part contains the message msg and the condition of the supposition part is valid for the data of msg. When we enable a new rule, the properties of its data correlation set are assigned the values queried from msg.
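These two steps can be sketched as one function over heavily simplified rule objects (contexts as plain message-name sets, no timing or correlation; the dict shape is ours, and the full treatment is Algorithm 6):

```python
def on_message(msg, t, currlist, rulelist):
    """Future-time rules: step 1 checks enabled rules, step 2 enables new ones."""
    verdict = True
    # Step 1: update and evaluate the contexts of already-enabled rules.
    for rule in list(currlist):
        if msg in rule["context"]:
            rule["seen"].add(msg)
        if rule["seen"] == rule["context"]:       # context complete -> verdict
            verdict = verdict and (rule["kind"] == "P")
            currlist.remove(rule)
    # Step 2: enable every rule whose supposition matches msg.
    for rule in rulelist:
        if rule["supposition"] == msg:
            currlist.append({"kind": rule["kind"],
                             "context": set(rule["context"]),
                             "seen": set(),
                             "active_time": t})
    return verdict
```

A permission rule {loginReq | loginRes} is enabled by loginReq and validated (pass) when loginRes later arrives.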


Rule with past time

In a rule with past time, the context part happens before the supposition is enabled. This means that the context part must be complete, and the evaluate function must return true or false, by the time the supposition is enabled. As with future time, there are two steps:

• Step 1) We first check the list of active rules (currlist). If the supposition part of a rule contains the message msg, the condition of the supposition part is valid for the data of msg, and the data correlation of msg is satisfied (when one is defined), we evaluate its context to give a verdict. Otherwise, we check the time constraints of the rules, and remove a rule from currlist if its time constraints can no longer be satisfied. If the context of a rule contains the message msg, we update the context and wait for the next message.

• Step 2) We examine all rules in rulelist and enable a rule (add it into currlist, to wait for the message of its supposition part) if its context contains the message msg. As for future-time rules, the properties of the data correlation set are assigned the values queried from msg.

Finally, we combine both cases into a complete algorithm. The detail of the main checking algorithm is shown in algorithm 6. This algorithm verifies message by message and returns the verdict at the arrival time of each message.


Algorithm 6: Runtime verification algorithm
Require: currlist is the list of current rules that were enabled,
         rulelist is the list of rules that are defined to verify the system.
Input: message msg, occurrence time t
Output: true/false

res ← true;
list ← ∅;  // a list
// step 1: check in currlist to give a verdict
foreach rule in currlist do
    // if a rule is enabled many times, we consider only one time (i.e., one session)
    if rule.id ∉ list then
        if rule is future time then
            res ← verify_future(rule, msg, t, res);
        else
            res ← verify_past(rule, msg, t, res);
        list.add(rule.id);
// step 2: check in rulelist to enable new rules
foreach rule in rulelist do
    if msg ∈ rule.supposition() ∧ rule.condition(msg) = true then
        if rule is future time then
            r1 ← rule;                // create a new rule
            r1.active_time ← t;       // set active time
            r1.assignValue4Properties(msg);
            currlist.add(r1);         // add into enabled list
        else if rule.correlation(msg) ≠ false ∧ rule.evaluate() ≠ true ∧ rule.id ∉ list then
            res ← false;
    else if rule is past time ∧ rule.id ∉ list ∧ rule.context.contain(msg) then
        r1 ← rule;                    // create a new rule
        r1.active_time ← t;           // set active time
        r1.update(msg);               // update context
        r1.assignValue4Properties(msg);
        currlist.add(r1);             // add into activated list
return res;


Algorithm 7: verify_future(rule, msg, t, result)
Require: currlist is a global variable
Input: rule: a rule, msg: a message, t: occurrence time
Output: true/false

if verifyTime(t, rule.active_time) = false ∧ rule.type = 'P' then
    result ← false;
    currlist.remove(rule);
else if rule.context.contain(msg) ∧ rule.correlation(msg) ≠ false then
    rule.update(msg);                 // update context
    if rule.evaluate() = true then
        currlist.remove(rule);
        if rule.type = 'F' ∧ verifyTime(t, rule.active_time) = true then
            result ← false;
    else if rule.evaluate() = false then
        currlist.remove(rule);
        if rule.type = 'P' then
            result ← false;
return result;

Algorithm 8: verify_past(rule, msg, t, result)
Require: currlist is a global variable
Input: rule: a rule, msg: a message, t: occurrence time
Output: true/false

if msg ∈ rule.supposition() ∧ rule.condition(msg) = true ∧ rule.correlation(msg) ≠ false then
    currlist.remove(rule);
    if rule.evaluate() = true then
        if rule.type = 'F' ∧ verifyTime(t, rule.active_time) = true then
            result ← false;
    else
        if rule.type = 'P' then
            result ← false;
else
    if verifyTime(t, rule.active_time) = false then
        currlist.remove(rule);
    else if rule.context.contain(msg) ∧ rule.correlation(msg) ≠ false then
        rule.update(msg);
return result;
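Algorithms 6 to 8 can be condensed into a small executable sketch. The FutureRule and Checker classes below are our own simplification (future-time permission rules only, with a single triggering message and a single expected message); the names are ours and this is not the tool's code.

```java
import java.util.ArrayList;
import java.util.List;

/** Simplified future-time permission rule: after message `trigger`,
 *  message `expected` must occur within `window` time units. */
final class FutureRule {
    final String id, trigger, expected;
    final double window;
    double activeTime;                          // active_time of Algorithm 6

    FutureRule(String id, String trigger, String expected, double window) {
        this.id = id; this.trigger = trigger; this.expected = expected; this.window = window;
    }

    FutureRule enable(double t) {               // new instance for a new session
        FutureRule r = new FutureRule(id, trigger, expected, window);
        r.activeTime = t;
        return r;
    }
}

/** Message-by-message checker following the two steps of Algorithm 6. */
final class Checker {
    private final List<FutureRule> rulelist;                      // rule definitions
    private final List<FutureRule> currlist = new ArrayList<>();  // enabled instances

    Checker(List<FutureRule> rulelist) { this.rulelist = rulelist; }

    /** Returns false when some rule is violated at the arrival of msg. */
    boolean onMessage(String msg, double t) {
        boolean res = true;
        List<String> seen = new ArrayList<>();  // at most one session per rule id
        // Step 1: try to give a verdict for the enabled instances.
        for (FutureRule r : new ArrayList<>(currlist)) {
            if (seen.contains(r.id)) continue;
            seen.add(r.id);
            if (t > r.activeTime + r.window) {  // permission expired: fail
                res = false;
                currlist.remove(r);
            } else if (msg.equals(r.expected)) {
                currlist.remove(r);             // context completed in time: pass
            }                                   // otherwise wait for the next message
        }
        // Step 2: enable a new instance when the supposition matches msg.
        for (FutureRule r : rulelist)
            if (msg.equals(r.trigger)) currlist.add(r.enable(t));
        return res;
    }
}
```

Note that the checker keeps only the enabled instances, never the messages themselves, which is the property claimed for the algorithm.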


Example 4.4. Consider an execution trace given as pairs of message name and occurrence time: MS = {(a0,0), (a1,2), (a0,3), (b1,8), (b0,9), (a1,12), (b2,15), (c0,16)}.

The rules that are defined to assess the system are:

• r1 = P(start(a0)|Od∈[0,10] done(b0) ∨ done(c0)),

• r2 = P(start(b1)|Od∈[+∞,0] done(a1) ∧ ¬done(c1)) and

• r3 = P(start(a2)|Od∈[0,10] done(b2)).

Table 4.1 shows the result of the algorithm when the set of rules {r1, r2, r3} is applied to verify the execution trace MS.

(a0, 0):  {r1+ = P(true|Od∈[0,10] done(b0) ∨ done(c0))}
          verdict: true, +r1
(a1, 2):  {r1 = P(true|Od∈[0,10] done(b0) ∨ done(c0));
           r2+ = P(start(b1)|Od∈[+∞,0] true ∧ ¬done(c1))}
          verdict: true, +r2
(a0, 3):  {r1 = P(true|Od∈[0,10] done(b0) ∨ done(c0));
           r2 = P(start(b1)|Od∈[+∞,0] true ∧ ¬done(c1));
           r1+ = P(true|Od∈[0,10] done(b0) ∨ done(c0))}
          verdict: true, +r1
(b1, 8):  {r1 = P(true|Od∈[0,10] done(b0) ∨ done(c0));
           r1 = P(true|Od∈[0,10] done(b0) ∨ done(c0))}
          verdict: true, -r2
(b0, 9):  {r1 = P(true|Od∈[0,10] done(b0) ∨ done(c0))}
          verdict: true, -r1
(a1, 12): {r1 = P(true|Od∈[0,10] done(b0) ∨ done(c0));
           r2+ = P(start(b1)|Od∈[+∞,0] true ∧ ¬done(c1))}
          verdict: true, +r2
(b2, 15): {r2 = P(start(b1)|Od∈[+∞,0] true ∧ ¬done(c1))}
          verdict: false*, -r1
(c0, 16): {r2 = P(start(b1)|Od∈[+∞,0] true ∧ ¬done(c1))}
          verdict: true

Table 4.1: An example of runtime verification (for each message: the enabled rule list after processing it, the verdict, and the rules added (+) or removed (-); r+ marks a newly enabled rule)

*At the message (b2, 15), we receive a false verdict because rule r1, whose last enabling message is (a0, 3) and which is valid for a duration of 10 time units, fails at time 15.
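The behaviour of r1 on this trace can be replayed in a few lines. The sketch below is a deliberate simplification that tracks only r1 (a window of 10 time units opened by each a0 and closed by b0 or c0, with at most one enabled instance examined per message, as in algorithm 6); it reproduces the verdict column of table 4.1.

```java
import java.util.ArrayDeque;

class ReplayR1 {
    /** Deadlines of the enabled instances of r1 (enabling time + 10). */
    static ArrayDeque<Integer> deadlines = new ArrayDeque<>();

    /** Simplified step 1 + step 2 for rule r1 only. */
    static boolean process(String msg, int t) {
        boolean res = true;
        if (!deadlines.isEmpty()) {                 // only the first instance is examined
            int d = deadlines.peekFirst();
            if (t > d) {                            // permission expired: fail verdict
                res = false;
                deadlines.removeFirst();
            } else if (msg.equals("b0") || msg.equals("c0")) {
                deadlines.removeFirst();            // context done(b0) ∨ done(c0) holds in time
            }
        }
        if (msg.equals("a0")) deadlines.addLast(t + 10);  // supposition start(a0) enables r1
        return res;
    }

    public static void main(String[] args) {
        String[] msgs = {"a0", "a1", "a0", "b1", "b0", "a1", "b2", "c0"};
        int[] times   = { 0,    2,    3,    8,    9,    12,   15,   16 };
        // prints true for every message except (b2,15), which prints false
        for (int i = 0; i < msgs.length; i++)
            System.out.println("(" + msgs[i] + "," + times[i] + ") -> " + process(msgs[i], times[i]));
    }
}
```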

4.3 Conclusion

In this chapter, we proposed a new methodology to perform passive testing of web services based on rule specifications. We defined the trace collection architectures and introduced an algorithm that checks the correctness of a sequence of messages (i.e., SOAP messages) against a set of rules. Importantly, this algorithm allows us to check a sequence of messages in parallel with a trace collection engine, verifying message by message without storing them. Time constraints, including future and past time, are also supported in the rules to verify the interval between messages. Our approach allows a rule to be applied only to messages that satisfy fixed conditions, or to groups of messages related by data correlation. The implementation of this approach (the RV4WS tool), together with the online testing approach of chapter §3 (the WSOTF tool), is presented in the next chapter.


Chapter 5

Implementation

Contents
5.1 A case study: Product Retriever
5.2 WSOTF tool
    5.2.1 Introduction
    5.2.2 Experimentations and results
5.3 RV4WS tool
    5.3.1 Introduction
    5.3.2 Experimentations and results
5.4 Conclusion

Several tools were developed in the context of the WebMov project; they also serve to demonstrate the theories presented in chapters §3 and §4. In this chapter, we introduce two of them: WSOTF (Web Service Online Testing Framework) and RV4WS (Runtime Verification for Web Services). WSOTF implements the online approach (section §3.3.4) and can be used to test a web service (from its WSDL specification) or to unit test an orchestration. RV4WS applies the theory of chapter §4, which allows us to verify a web service at runtime. For each tool, we report experimental results on a real-life case study.

5.1 A case study: Product Retriever

In this section, we present a real-life case study of the WebMov project, named Product Retriever [115]. This case study is a BPEL process that automates part of the purchasing process: it retrieves a searched product sold by a preauthorised provider. The search is limited by specifying a budget range and one or more keywords characterizing the product. The search is performed through the operation getProduct and its parameter RequestProductType, which is composed of information about the user (firstname, lastname and department) and the searched product (keyword, max price, category). The process has 4 partner services, named AmazonFR, AmazonUK, CurrencyExchange and PurchaseService, which are developed by Montimage1 and available

1http://www.montimage.com/


at http://80.14.167.59:11404/servicename. Its overall behaviour, illustrated in figure 5.1, is described as follows:

1. Receives a message from the client with the product category and the keywords characterizing the product.

2. Contacts the PurchaseService partner to obtain the list of authorized providers for that product. If there is no authorized provider, the client is notified by a fault message response.

3. Depending on the authorized provider, contacts either the AmazonFR or the AmazonUK service to search for a product matching the keywords and the price limit in euros.

4. Sends back to the client the product information, the name of the provider where the product was found, and a link where it can be ordered. If no matching product is found, a response with an unsatisfying product is sent back to the client.

5. After receiving the product information, the client can send an authorization request to confirm the purchase of the product within a certain duration (e.g., one minute).

The Product Retriever service is built in NetBeans 6.5.1 and deployed by the Sun BPEL engine within a Glassfish 2.1 web server.
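The expected price handling of steps 3 and 4 can be sketched as follows. The method below is our hypothetical reconstruction of the intended behaviour (UK prices converted to euros before the budget comparison), with names of our own choosing; it is not the actual BPEL code.

```java
/** Hypothetical sketch of the expected price logic of the process. */
final class ProductLogic {
    /** Returns the price in euros when it fits the budget, or -1 to signal
     *  an "unsatisfying product" response (step 4). */
    static double checkOffer(String provider, double price, double rate, double maxPrice) {
        // Step 3: AmazonUK prices are in pounds and must be converted to euros.
        double priceInEuro = provider.equals("AmazonUK") ? price * rate : price;
        // Step 4: a product above the budget must not be returned to the client.
        return priceInEuro <= maxPrice ? priceInEuro : -1;
    }
}
```

For example, a 500-pound product with exchange rate 1.5 fits an 800-euro budget (750 ≤ 800), while with rate 2.5 it exceeds a 1000-euro budget (1250 > 1000) and should lead to an unsatisfying-product response.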


Figure 5.1: ProductRetriever - BPMN specification


5.2 WSOTF tool

5.2.1 Introduction

WSOTF (Web Service Online Testing Framework) is developed based on algorithm 3 (section §3.3.4). It can be used to test a web service (from its WSDL specification) or a web service orchestration. It is implemented in Java; its detailed architecture, shown in figure 5.2, consists of two parts: the controller and the adapter. The controller is composed of five main components: a loader, an interface to process data, a data function library, an interface to send/receive messages to/from the SUT, and an executer.

Figure 5.2: Architecture of the WSOTF engine

1. Loader: loads the input format and analyses it;

2. DataProcess Interface: used to generate the data content, get the name of a message and query values from a message. To implement this interface, we reuse code from SoapUI [58] to generate the SOAP format. The data for each field of a SOAP message is either generated randomly or taken from a default value, depending on the configuration file;


3. Data function library: defines a list of functions that are used to update the variables or to evaluate a boolean expression. In the current version, only variables of type int, boolean or string are supported to control the internal behaviour;

4. Executer: implements the online testing algorithm to generate a test case, controls test execution and assigns the verdict. It uses the data process interface and the data function library to generate SOAP messages, update the values of the variables and evaluate the constraints;

5. Test Execution Interface: used to send and receive messages to/from the SUT. To implement it, we used an HTTP client to invoke requests on the SUT (the client request, or the partner callback in the case of asynchronous services, where the result is returned on a different port) and an HTTP server to receive and return messages from the SUT on the same port. When the test execution interface receives a SOAP message from the SUT, it puts the message into a queue, where it waits to be processed by the executer. It receives SOAP messages directly from the executer and sends them to the SUT.
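The hand-off between the test execution interface and the executer described in item 5 can be sketched with a standard blocking queue (a minimal illustration with names of our own choosing, not the tool's actual code):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** Minimal sketch of the SUT-to-executer hand-off of item 5. */
final class TestExecutionInterface {
    private final BlockingQueue<String> inbox = new LinkedBlockingQueue<>();

    /** Called by the embedded HTTP server when a SOAP message arrives from the SUT. */
    void onMessageFromSut(String soapMessage) {
        inbox.add(soapMessage);          // queued until the executer processes it
    }

    /** Called by the executer; blocks until a message is available. */
    String nextMessage() {
        try {
            return inbox.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
    }
}
```

The queue decouples the HTTP server thread from the executer, which is exactly the role the text assigns to the waiting queue.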

Figure 5.3 shows an example of the WSOTF input format. This format consists of a partner section that declares the partner names and the locations of the WSDL specifications, a list of variables (of type int, boolean, or a SOAP message type defined in the WSDL), local clocks, the initial state, and a list of transitions. Each transition consists of seven fields: source state, target state, event name, guard on variables, guard on clocks, data update function and local clocks to be reset. In WSOTF, an input action is written ?pl.pt.op.msg, meaning the reception of the message msg for the operation op of the portType pt from the partner pl; symmetrically, an output action is written !pl.pt.op.msg (the emission of the message msg for the operation op of the portType pt to the partner pl). The result of WSOTF (traces including the interval times between actions and the corresponding verdicts) is saved in an XML file. The tool allows the values of some fields of SOAP messages to be declared in an enumeration file, which can be used for testing by purpose (fixing the condition of each branch), for correlation data (the current version does not yet support correlation data functions), or for debugging. At each execution, WSOTF requests an integer N and repeats N times, generating N traces when there is no error, with the corresponding verdict PASS. WSOTF stops immediately when it finds an error (an incorrect received message or a timeout).
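The action labels can be decoded as follows; this small parser is our own illustration of the ?pl.pt.op.msg convention (the Action type and its field names are not part of WSOTF):

```java
/** Decodes WSOTF action labels of the form ?pl.pt.op.msg or !pl.pt.op.msg. */
record Action(boolean input, String partner, String portType, String operation, String message) {
    static Action parse(String label) {
        boolean input = label.startsWith("?");      // '?' reception, '!' emission
        if (!input && !label.startsWith("!"))
            throw new IllegalArgumentException("label must start with ? or !");
        String[] p = label.substring(1).split("\\.");
        if (p.length != 4)
            throw new IllegalArgumentException("expected pl.pt.op.msg");
        return new Action(input, p[0], p[1], p[2], p[3]);
    }
}
```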

The tool returns an XML trace file with the following format:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<traces>
  <trace id="t0">
    <action name="xloan.xLoanPT.request.requestRequest" timedelay="0.0" type="input">
      <data>
        <xlo:requestInfo xmlns:xlo="http://fr.webmov.labri/xloan/">
          <id>1</id><name>honorem talia flammato</name><income>9994</income>
          <amount>9997</amount><maxpayment>10005</maxpayment><maxmonth>10007</maxmonth>
        </xlo:requestInfo>
      </data>
    </action>
    <action name="bank.bankPT.approve.approveRequest" timedelay="0.05" type="output">
      <data>
        <aetgt:approveInfo xmlns:aetgt="http://fr.webmov.labri/bank/"
                           xmlns:ns1="http://fr.webmov.labri/xloan/">
          <id>1</id><name>honorem talia flammato</name><income>9994</income>
          <amount>9997</amount><maxpayment>10005</maxpayment><maxmonth>10007</maxmonth>
        </aetgt:approveInfo>
      </data>
    </action>
    <action name="bank.bankPT.approve.approveResponse" timedelay="0.0" type="input">
      <data>
        <bank:approveRes xmlns:bank="http://fr.webmov.labri/bank/">accept</bank:approveRes>
      </data>
    </action>
    <action name="xloan.xLoanPT.request.requestResponse" timedelay="0.01" type="output">
      <data>
        <aetgt:requestRes xmlns:aetgt="http://fr.webmov.labri/xloan/"
                          xmlns:ns1="http://fr.webmov.labri/bank/">accept</aetgt:requestRes>
      </data>
    </action>
    <action type="delay">
      <data>60</data>
    </action>
    <action name="bank.bankPT.cancel.cancelRequest" timedelay="0.0" type="output">
      <data>
        <aetgt:cancelIn xmlns:aetgt="http://fr.webmov.labri/bank/"
                        xmlns:ns1="http://fr.webmov.labri/xloan/">1</aetgt:cancelIn>
      </data>
    </action>
  </trace>
  ...
</traces>

Figure 5.3: Input format of WSOTF

We have developed a graphical user interface (named TEFSM Designer) to design the input of WSOTF and to visualize the test results (appendix A.1.1). The test results are shown in two forms: as a sequence of messages and as a path on the TEFSM graph.

5.2.2 Experimentations and results

In this section, we describe how to test a web service orchestration using the WSOTF tool. The tested orchestration is the Product Retriever of section §5.1. Because the tool can simulate all the partners of an orchestration, the partners are simulated by WSOTF instead of using the existing ones (i.e., AmazonUK, AmazonFR, Purchase, CurrencyExchange). To do this, before deploying the orchestration, we modified the endpoints of its partners to point to the server handle of WSOTF (for example: http://localhost:8888/wsotf/servicename).


Modeling the Product Retriever orchestration by TEFSM

Lallali et al. [94] define rules to translate a BPEL specification into a TEFSM. They use a local clock, and optionally a global clock, to verify the time constraints; the value of an input/output message is handled by a variable. Here, we use these rules to model the Product Retriever orchestration as a TEFSM, except that the assign activity is not considered, because we are only interested in the time constraints and the communication activities. Figure 5.4 presents the graph of the ProductRetriever TEFSM.

Figure 5.4: TEFSM of ProductRetriever


Test results and analysis

In this section, we present the test results obtained by applying our tool to the Product Retriever service. This case study involves data correlation when the client sends the getAuthorisationRequest message (i.e., on firstName, lastName and userId). Our tool does not yet handle data correlation when generating SOAP messages, but we can declare an enumeration for each field of the SOAP messages in an enum file. To limit the repetition of coincident cases, we declared a list of fixed values for the condition fields, for example provider=AmazonFR;AmazonUK or maxPrice=1000;900;800. Table 5.1 summarizes the test results of the Product Retriever service using WSOTF. In these results, we found a fault, not admitted by the specification, which occurs when the product price in euros returned by the partner is greater than maxPrice.

1. ?getProductRequest(maxPrice=1000) → !getProviderRequest() → ?getProviderResponse(provider=AmazonFR) → !searchItemFRRequest() → ?searchItemFRResponse(price=500) → !getProductResponse(price=500) → delay=60
   Verdict: Pass

2. ?getProductRequest() → !getProviderRequest() → ?getProviderFault() → !getProductFault()
   Verdict: Pass

3. ?getProductRequest(maxPrice=800) → !getProviderRequest() → ?getProviderResponse(provider=AmazonUK) → !searchItemUKRequest() → ?searchItemUKResponse(price=500) → !getCurrencyRateRequest() → ?getCurrencyRateResponse(rate=1.5) → !getProductResponse(price=750) → delay=60
   Verdict: Pass

4. ?getProductRequest(maxPrice=1000) → !getProviderRequest() → ?getProviderResponse(provider=AmazonUK) → !searchItemUKRequest() → ?searchItemUKResponse(price=500) → !getCurrencyRateRequest() → ?getCurrencyRateResponse(rate=2) → !getProductResponse(price=1000) → ?getAuthorizationRequest() → !purchaseAuthorizationRequest() → ?purchaseAuthorizationResponse() → !getAuthorizationResponse()
   Verdict: Pass

5. ?getProductRequest(maxPrice=1000) → !getProviderRequest() → ?getProviderResponse(provider=AmazonUK) → !searchItemUKRequest() → ?searchItemUKResponse(price=500) → !getCurrencyRateRequest() → ?getCurrencyRateResponse(rate=2.5) → FAULT
   Verdict: Fail

Table 5.1: Test results of ProductRetriever by the WSOTF tool

5.3 RV4WS tool

5.3.1 Introduction

RV4WS (Runtime Verification for Web Services) is implemented to verify a web service at runtime against a set of constraints declared with the syntax defined in section 4.2.2.


The tool receives a sequence of messages (message contents and occurrence times) via a TCP/IP port, then verifies the correctness of this sequence. The architecture is detailed in figure 5.5.

Figure 5.5: Architecture of the RV4WS tool

One of the most interesting components of this architecture is the checking engine, which implements the runtime verification algorithm 6. The engine verifies each incoming message without any constraint of order dependencies, so the approach applies to both online and offline testing; moreover, the algorithm verifies the current message without storing previous ones. To reuse the engine for other systems, whose only difference lies in the data structure of the input/output messages, we define an interface (IParseData, shown in figure 5.6) as an adapter that parses the incoming data of RV4WS. The methods of IParseData gather information from an incoming message: getMessageName() returns the message name from its content, and queryData() queries a data value from a field of the message content. In each concrete case, we implement this interface; for Web services, the implementation is the class ParseSoapImpl. The engine has been designed as a Java library and is controlled by a component called Controller, which receives a data stream coming from the TCP/IP port.

Figure 5.6: ParseData Interface of RV4WS
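The adapter can be sketched as follows. The method signatures are assumptions based on the names mentioned above, and NaiveSoapParser is a naive string-based stand-in for ParseSoapImpl, not the tool's implementation:

```java
// Hypothetical reconstruction of the adapter; the signatures are assumptions.
interface IParseData {
    String getMessageName(String content);          // message name from its content
    String queryData(String content, String field); // value of one field of the content
}

/** Naive stand-in for ParseSoapImpl: extracts names and values by string scanning. */
final class NaiveSoapParser implements IParseData {
    @Override
    public String getMessageName(String content) {
        int start = content.indexOf('<');
        int end = content.indexOf('>', start);
        String tag = content.substring(start + 1, end).split("\\s")[0];
        int colon = tag.indexOf(':');               // drop the namespace prefix
        return colon >= 0 ? tag.substring(colon + 1) : tag;
    }

    @Override
    public String queryData(String content, String field) {
        String open = "<" + field + ">";
        int start = content.indexOf(open);
        if (start < 0) return null;                 // field not present
        int end = content.indexOf("</" + field + ">", start);
        return content.substring(start + open.length(), end);
    }
}
```

A real implementation would use an XML parser; the point here is only the adapter pattern that isolates the checking engine from the message format.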


The input format of the tool is an XML file, an example of which is given in figure 5.7. A rule with a true verdict represents a permission and a rule with a false verdict represents a prohibition. The context of a rule is expressed as an expression over the two operators AND and OR. Each data correlation is defined as a property with query expressions over different SOAP messages. For web services, we have developed a Graphical User Interface (GUI) that allows a set of rules to be defined easily from WSDL files (appendix A.2).

Figure 5.7: Rule format example

If a rule is found not to be satisfied, the checking algorithm returns a fail verdict; the rule may simply not apply to the current message. To know which rule failed at the arrival of a message, we also provide a Graphical User Interface (GUI) that visualizes statistical properties computed at any moment of the testing process. Whenever a rule is activated, meaning that its conditions have been satisfied, a counter-type statistical property is used to compute the percentage of unsatisfied applications of the rule on the input data stream. When the rule is satisfied, we record the duration from the activation moment to the moment its context holds. We maintain three statistical time properties (time-min, time-max and time-average) for each rule.
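The statistical properties described above can be sketched as a small accumulator (the field and method names are ours):

```java
/** Per-rule statistics as described in the text (names are ours). */
final class RuleStats {
    int activations, failures;
    double timeMin = Double.POSITIVE_INFINITY, timeMax, timeSum;

    void recordFail() {
        activations++;
        failures++;
    }

    /** dur is the activation-to-context-holds duration of a satisfied activation. */
    void recordPass(double dur) {
        activations++;
        timeMin = Math.min(timeMin, dur);
        timeMax = Math.max(timeMax, dur);
        timeSum += dur;
    }

    double failPercent() { return activations == 0 ? 0 : 100.0 * failures / activations; }
    double timeAverage() { return activations == failures ? 0 : timeSum / (activations - failures); }
}
```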

We also need to know the values of these statistical properties and to visualize the relationships between them, for example how a rule's fail percentage relates to its duration or to the other properties. A histogram view applied to each property separately cannot convey this information because of the different scales of the properties. We therefore built a visual interface based on the parallel coordinates scheme introduced by Inselberg [86]. In information visualization, a parallel coordinates view shows the relationships between items of a multidimensional dataset: the axes are parallel to each other, and a point in n-dimensional space is represented as a polyline with vertices on these axes. Considering the list of statistical properties of our testing process as multivariate data, we have applied


this visualization to the RV4WS tool, making it possible to explore the results of our checking algorithms. As said earlier, the checking algorithms implemented inside RV4WS enable a tester to verify the conditions defined in the rules. Since the rules' properties change over time, the tester often needs a complete view of these traces of the testing process; there is one parallel coordinates view per rule. In figure 5.8, each parallel coordinates scheme represents a time-log of statistical values as polylines crossing the property axes. Within each view there is a single polyline per time instance, and the lines of the current time are always highlighted, so the tester can see at a glance whether the changes of an executing rule's properties are of interest. For performance reasons, the visualization is refreshed periodically rather than in real time.

Figure 5.8: Checking analysis of RV4WS tool
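Because each property lives on its own scale, a parallel coordinates view normalizes every value per axis before drawing the polylines. A minimal sketch (the Axis class is our own illustration, not code from the tool):

```java
/** Normalizes each property to [0,1] on its own axis, as parallel coordinates require. */
final class Axis {
    final double min, max;

    Axis(double min, double max) { this.min = min; this.max = max; }

    /** Vertical position of a value on this axis. */
    double position(double value) {
        if (max == min) return 0.5;   // degenerate axis: center the point
        return (value - min) / (max - min);
    }
}
```

One polyline is then drawn by computing position() of each property on its own axis, which is what makes properties of very different magnitudes comparable in a single view.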

5.3.2 Experimentations and results

In this section, we present some preliminary results of our first experimentations on the Product Retriever case study (section §5.1) using the RV4WS tool. SoapUI [58] is a well-known test tool for web services; we used it in our experiments as a client of the Product Retriever service, sending requests to activate the web service (i.e., the BPEL process). To collect the messages exchanged between the Product Retriever service and its partners (including SoapUI), we developed a proxy that forwards each message to a specified destination; it can receive and forward from/to several sources and destinations, each connection being handled on a different port. Each message, together with its occurrence time, is then also sent to our tool (i.e., RV4WS) to check its correctness. SoapUI and the Product Retriever service were configured to connect through the proxy.


The connection information (service name) is also sent to RV4WS, to help the tool identify which message belongs to which service. Figure 5.9 shows our testbed architecture.

Figure 5.9: Testbed architecture

Rule definitions

Many test purposes can be defined to verify the interaction order with the partner services. Here we introduce three of them:

1. If, during the execution of the service, the client receives a ProductFault message, then the Purchase service must have returned a ProviderFault message beforehand. The time constraint of this test purpose is less important, so we set the maximum interval between the two messages to 10 seconds.

P(start(ProductFault) | Od∈[10,0]s done(ProviderFault))

2. If the Purchase service returns the provider AmazonUK, then the orchestration must contact the CurrencyExchange service within a maximum of 10 seconds.

P(start(getProviderResponse[provider = AmazonUK]) | Od∈[0,10]s done(getCurrencyRateRequest))


3. When the client sends an authorization request message to confirm the purchase of a product, it must have received, at most one minute earlier, a product response message whose EmptyResponseProduct field is null. In this rule, the data correlation is on userId.

P(start(getAuthorizationRequest) | Od∈[1,0]m done(getProductResponse[EmptyResponseProduct = null])) / (getAuthorizationRequest.userid, getProductResponse.userid)

Checking results

Figure 5.10: Checking analysis of Product Retriever

Figure 5.10 presents the checking analysis of the Product Retriever. The figure indicates that: 1) the fault messages defined in rule 1 do not occur; 2) the getProviderResponse message with provider = AmazonUK appeared twice, but the tool did not find a getCurrencyRateRequest message within 10 seconds of the occurrence time of getProviderResponse; as figure 5.11 shows, the interval between them is 11 seconds in the first case and 25 seconds in the second; 3) the message getAuthorizationRequest


appeared three times, and in each case the getProductResponse message appeared beforehand with an empty EmptyResponseProduct field and an interval of less than one minute between them. Figure 5.11 shows the false verdict returned when itemSearchResponse arrives: at the occurrence time of itemSearchResponse, the time constraint of the second rule (i.e., 10 seconds) is not satisfied.

Figure 5.11: Trace collection of Product Retriever

5.4 Conclusion

This chapter introduced two tools implemented for web service testing. The first, named WSOTF, is based on the online testing approach for unit testing. The tool supports a complete test scenario (generating the abstract test case, deriving the concrete test case, executing the test, announcing the verdict), time constraints, and time delays to synchronize with the service under test. It allows us to simulate all the partner services, which avoids unnecessary disturbance of the running partners and reduces the interaction scenarios with less effort. A limitation of the current version is that data correlation between input messages is not supported.

The second, named RV4WS, focuses on runtime verification: the tool checks whether an execution trace is correct with respect to a set of rules with timing constraints, including future and past time. The tool can check an execution trace in parallel with the trace collection engine; specifically, it checks a trace message by message without storing the messages.


Chapter 6

Conclusions and perspectives

Contents
6.1 Synthesis and results
6.2 Perspectives

This thesis allowed me to update my knowledge of web services, their composition, the different formal models of web service orchestration, and the testing approaches based on these models. The purpose of this thesis is to consider several aspects of web service testing, such as unit testing, integration testing and passive testing, and to propose a test method for each aspect. The search for suitable test approaches for web services allowed me to explore many testing approaches of interest. In the following, we summarize the completed work and the obtained results, and discuss some remaining problems for future work.

6.1 Synthesis and results

In the context of this thesis and the WebMov project, besides updating our knowledge about web services, their composition, the different formal models of web service orchestration, the testing approaches applied to these models, and many other testing approaches of interest, we obtained the following results:

Unit testing framework. In chapter §3, we introduced a unit testing framework for web service composition. This framework follows a gray-box testing approach and is composed of a centralized test architecture, a conformance relation (i.e., xtioco) and an offline testing approach with five steps: (i) modelling the web service composition by a TEFSM, (ii) generating all test purposes, (iii) generating the abstract timed test cases, (iv) deriving the concrete timed test cases, (v) executing the timed test cases and producing the verdict. We also presented a tutorial on two existing tools, TGSE and BPELUnit, that were used as support tools for our framework.

Automatic timed test case generation. Test case generation is always the hardest problem of any testing approach. In Chapter 3, we presented a new method for automatically generating timed test cases from a TEFSM specification using test purposes. This method generates the timed test cases by computing the synchronous product of the TEFSM specification and a test purpose. While computing the synchronous product, we also check whether each path is feasible, in order to cut the paths that do not satisfy the test purpose. Finally, a test case is generated by selecting a trace leading to an accept state.
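The core of this generation step can be illustrated with a minimal sketch, assuming toy transition tables in place of a full TEFSM (timing guards and variable updates are omitted); the state and action names below are hypothetical:

```python
from collections import deque

# Hypothetical miniature example: both the specification and the test
# purpose are maps (state, action) -> next_state.  "accept" marks the
# accepting state of the test purpose.
spec = {("s0", "a"): "s1", ("s1", "b"): "s2", ("s1", "c"): "s0"}
purpose = {("p0", "a"): "p1", ("p1", "b"): "accept"}

def product_trace(spec, purpose, start=("s0", "p0")):
    """BFS over the synchronous product; stop at the first accept state."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (s, p), trace = queue.popleft()
        if p == "accept":
            return trace                      # a test case: trace to accept
        for (src, act), dst in spec.items():
            if src != s or (p, act) not in purpose:
                continue                      # cut paths the purpose rejects
            nxt = (dst, purpose[(p, act)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [act]))
    return None

print(product_trace(spec, purpose))  # -> ['a', 'b']
```

Here the specification transition c, which the test purpose never allows, is pruned automatically: the product only follows actions accepted by both machines.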

Online testing approach. We have also proposed an online testing approach based on the TEFSM specification of a web service orchestration (Section 3.3.4 of Chapter 3). This approach works like a debugger: from the current state, an input or a delay action is selected based on the current values of the variables and is immediately sent to the Service Under Test (SUT). If the expected output is returned within the expected duration, we update the current state and repeat until a final state is reached; otherwise a fail verdict is produced. This approach also uses the centralized test architecture and the xtioco conformance relation. We have implemented it in the WSOTF tool, which is introduced in Chapter 5.
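The loop below is a simplified illustration of such an online tester, not the WSOTF implementation; the model table, the message names, and the local call_sut stub are hypothetical stand-ins for a real SOAP/HTTP exchange:

```python
import time

# Hypothetical model: (state, input) -> (expected output, next state,
# maximum allowed response delay in seconds).
MODEL = {
    ("s0", "order"): ("ack", "s1", 1.0),
    ("s1", "pay"):   ("receipt", "final", 1.0),
}

def call_sut(state, inp):
    """Fake SUT call: answers instantly with the modelled output."""
    out, _, _ = MODEL[(state, inp)]
    return out

def online_test(inputs, start="s0"):
    """Send inputs one by one; fail as soon as output or timing is wrong."""
    state = start
    for inp in inputs:
        out, nxt, max_delay = MODEL[(state, inp)]
        t0 = time.monotonic()
        got = call_sut(state, inp)
        elapsed = time.monotonic() - t0
        if got != out or elapsed > max_delay:
            return "FAIL"
        state = nxt                      # update current state and repeat
    return "PASS" if state == "final" else "INCONCLUSIVE"

print(online_test(["order", "pay"]))  # -> PASS
```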

A method for integration testing. A web service composition is a system integrated at runtime, and its behaviour depends on its partners and on the conditions of the real environment. Integration testing can be performed to verify timing constraints on the messages transmitted between the service under test and its partners in the real environment, and to check whether this service is ready to use. Passive testing may be used to verify an execution trace of a web service orchestration, but in passive testing the tester does not interact directly with the service under test (SUT). This may force us to spend a long time collecting the execution trace, because we do not know when a session starts and when it finishes. To address this problem, we proposed an approach for integration testing of a web service orchestration in Section 3.4 of Chapter 3. This approach interacts directly with the SUT by sending input requests to start a new session, and then collects the trace for analysis. An algorithm that checks the conformance of an execution trace against a TEFSM is also proposed.
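A minimal sketch of such a trace-conformance check follows, with a hypothetical specification table and a single per-step deadline in place of full TEFSM clock constraints:

```python
# Hypothetical specification: (state, message) -> next_state, plus a
# maximum delay allowed between two consecutive observed messages.
SPEC = {("s0", "ack"): "s1", ("s1", "receipt"): "s2"}
DEADLINE = 2.0  # seconds

def check_trace(trace, start="s0"):
    """trace: list of (message, timestamp) pairs collected from the SUT."""
    state, last_t = start, None
    for msg, t in trace:
        if (state, msg) not in SPEC:
            return "FAIL"                       # message not allowed here
        if last_t is not None and t - last_t > DEADLINE:
            return "FAIL"                       # timing constraint violated
        state, last_t = SPEC[(state, msg)], t
    return "PASS"

print(check_trace([("ack", 0.0), ("receipt", 1.5)]))  # -> PASS
print(check_trace([("ack", 0.0), ("receipt", 5.0)]))  # -> FAIL
```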

A new runtime verification method. Runtime verification, or passive testing, is a method that collects the observable traces of a system through a probe and analyzes them to produce a verdict. It is applied to a running service to verify whether the service satisfies certain properties in the real environment. In Chapter 4, we proposed a new rule-based approach to passive testing of web services. This approach handles time constraints, including both future and past time, as well as data correlation problems. The algorithm can be used for either online or offline checking.
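As an illustration only (not the RV4WS rule language), the following sketch checks one future-time rule with data correlation: every request must be answered, within a time window, by a response carrying the same correlation id. The message kinds and window value are hypothetical:

```python
WINDOW = 3.0  # seconds within which a response must arrive

def verify(trace):
    """trace: list of (kind, corr_id, timestamp); returns violated ids."""
    pending = {}                       # corr_id -> time the request arrived
    violations = []
    for kind, cid, t in trace:
        # discharge obligations whose deadline has already passed
        for rid, t0 in list(pending.items()):
            if t - t0 > WINDOW:
                violations.append(rid)
                del pending[rid]
        if kind == "request":
            pending[cid] = t           # open a future-time obligation
        elif kind == "response":
            pending.pop(cid, None)     # same id: the rule is satisfied
    violations.extend(pending)         # requests never answered at all
    return violations

print(verify([("request", "A", 0.0), ("response", "A", 1.0)]))  # -> []
print(verify([("request", "B", 0.0), ("response", "B", 9.0)]))  # -> ['B']
```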

Implementation of the proposed methods. In the context of the WebMov project, we implemented two tools, WSOTF and RV4WS, which are introduced in Chapter 5. The first tool implements the online testing approach, which generates and simultaneously executes the timed test cases; the complete test scenario (the test case and its data) is built during test execution. It also supports time constraints and time delays. To help design the input (i.e., the TEFSM) and visualize the results, a graphical user interface, named TEFSM Designer, was also developed. The second tool focuses on the runtime verification problem and allows us to check constraints on the messages exchanged in the real environment.


6.2 Perspectives

In this section, we present some remaining problems and discuss possible solutions.

Supporting data correlation in the WSOTF tool. The correlation of messages by their data content is a common concern in web services, but our active testing tool (WSOTF) does not yet support it. Its current version allows us to fix the content of base-typed fields (fields whose data type is int, string, boolean, etc.) in a message by declaring their content in an enumeration file, and this mechanism can be used to correlate messages. However, when two messages are correlated through a complex structure, the tool cannot handle the case. In the next version, we plan to add support for data correlation.
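The limitation can be illustrated as follows; the message structures and field paths are hypothetical. Correlating on a base-typed field is a flat lookup, while correlating through a complex structure requires navigating nested content, which the current WSOTF version does not do:

```python
def correlate_flat(m1, m2, field):
    """Correlation on a base-typed field: a simple top-level comparison."""
    return m1.get(field) == m2.get(field)

def get_path(msg, path):
    """Navigate a nested message along a slash-separated path."""
    for key in path.split("/"):       # e.g. "order/customer/id"
        msg = msg.get(key, {})
    return msg or None

def correlate_deep(m1, m2, path):
    """Correlation through a complex structure: compare nested values."""
    v1, v2 = get_path(m1, path), get_path(m2, path)
    return v1 is not None and v1 == v2

req  = {"orderId": 7, "order": {"customer": {"id": "C42"}}}
resp = {"orderId": 7, "order": {"customer": {"id": "C42"}}}
print(correlate_flat(req, resp, "orderId"))             # -> True
print(correlate_deep(req, resp, "order/customer/id"))   # -> True
```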

Improving the online testing algorithm. In a web service orchestration, some actions (or sequences of actions) may run in parallel while the rest execute sequentially, for example the activities inside the flow activity of the BPEL language. In this case, it is not easy to generate test cases with an offline testing approach. With the online approach, however, we can in future work extend the TEFSM with a set of synchronization states: Syn : S → {on, off} is a mapping that assigns this property to states. A state with property on (resp. off) indicates that its following actions will be executed concurrently (resp. that the concurrent part is finished). We can then improve our algorithm to process the activities of a flow activity simultaneously. When we reach a state whose property is on, all following actions (or sequences of actions) are processed simultaneously, each with its own current state. When we reach a state whose property is off, these current states are removed and only one is kept to continue with the rest of the specification.
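A toy illustration of this idea, with hypothetical states and actions: an on state forks one current position per branch, the branches advance independently (here interleaved round-robin), and the off state joins them back into a single current state:

```python
# Hypothetical flow fragment: state "s1" is marked "on" and forks two
# parallel sequences of actions; state "s4" is marked "off" and joins them.
SYN = {"s1": "on", "s4": "off"}
BRANCHES = {"s1": [["a1", "a2"], ["b1"]]}

def run_flow(state="s1"):
    """Process the parallel part of a flow; return actions and join state."""
    executed = []
    if SYN.get(state) == "on":
        # one current position per branch, advanced round-robin
        positions = [list(seq) for seq in BRANCHES[state]]
        while any(positions):
            for seq in positions:
                if seq:
                    executed.append(seq.pop(0))
        state = "s4"                  # all branches done: join at the "off" state
    return executed, state

acts, state = run_flow()
print(acts, state)  # -> ['a1', 'b1', 'a2'] s4
```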

Testing web service choreographies. Most of the work presented in this thesis focuses on testing web service orchestrations, which describe the interaction of a main process with other services. A choreography is a web service composition that describes the collaboration among services at the level of exchanged messages. This collaboration is distributed: each service is responsible for a part of the workflow. In the future, we will study how to model a choreography as a TEFSM from its specification, for example WS-CDL [144], in order to apply our approach for timed test case generation. Moreover, we will define architectures to test a choreography, or a part of one, and then apply our tools, such as WSOTF and RV4WS, to test them. Figure 6.1 presents an example of a choreography and of a part under test.


Figure 6.1: A choreography example and the part under test


Bibliography

[1] NuSMV: a new symbolic model checker. http://nusmv.irst.itc.it/index.html.
[2] International Workshop on Web Services and Formal Methods, WS-FM'04-09. Lecture Notes in Computer Science, 2004-2009.
[3] International Standard ISO 9646. Information technology, open systems interconnection, conformance testing methodology and framework, 1991.
[4] Active-BPEL. http://www.activevos.com/community-open-source.php.
[5] G. Alonso, F. Casati, H. Kuno, and V. Machiraju. Web Services - Concepts, Architectures and Applications. Data-Centric Systems and Applications. Springer Verlag, 2004.
[6] Rajeev Alur and David L. Dill. A theory of timed automata. Theoretical Computer Science, 126(2):183–235, 1994.
[7] Rajeev Alur and David L. Dill. Automata-theoretic verification of real-time systems. In Formal Methods for Real-Time Computing, 1996.
[8] César Andrés, Mercedes G. Merayo, and Manuel Núñez. Passive testing of timed systems. In International Symposium on Automated Technology for Verification and Analysis, volume 5311, pages 418–427. LNCS, 2008.

[9] César Andrés, Mercedes G. Merayo, and Manuel Núñez. Formal correctness of a passive testing approach for timed systems. In IEEE International Conference on Software Testing, Verification, and Validation Workshops, pages 67–76, Denver, Colorado, USA, April 1-4, 2009.
[10] César Andrés, Mercedes G. Merayo, and Manuel Núñez. Passive testing of stochastic timed systems. In International Conference on Software Testing Verification and Validation, pages 71–80, Denver, Colorado, USA, April 1-4, 2009.
[11] Xiaoying Bai, Wenli Dong, Wei-Tek Tsai, and Yinong Chen. WSDL-based automatic test case generation for web services testing. In International Workshop on Service-Oriented System Engineering (SOSE'05), pages 207–212, Beijing, October 2005. IEEE Computer Society.
[12] Luciano Baresi and Sam Guinea. Towards dynamic monitoring of WS-BPEL processes. In Third International Conference on Service-Oriented Computing, pages 269–282, Amsterdam, The Netherlands, December 12-15, 2005.
[13] Luciano Baresi, Sam Guinea, Raman Kazhamiakin, and Marco Pistore. An integrated approach for the run-time monitoring of BPEL orchestrations. In 1st European Conference on Towards a Service-Based Internet, pages 1–12, Madrid, Spain, 2008. Springer-Verlag.
[14] Luciano Baresi, Sam Guinea, Marco Pistore, and Michele Trainotti. Dynamo + Astro: An integrated approach for BPEL monitoring. In 2009 IEEE International Conference on Web Services, pages 230–237, Los Angeles, CA, USA, July 6-10, 2009.
[15] Howard Barringer, Allen Goldberg, Klaus Havelund, and Koushik Sen. Rule-based runtime verification. In 5th International Conference on Verification, Model Checking, and Abstract Interpretation, pages 277–306, Venice, Italy, January 11-13, 2004.


[16] Cesare Bartolini, Antonia Bertolino, Eda Marchetti, and Andrea Polini. WS-TAXI: A WSDL-based testing tool for web services. In 2009 International Conference on Software Testing Verification and Validation, pages 326–335, Denver, Colorado, USA, April 1-4, 2009. IEEE Computer Society.
[17] Emmanuel Bayse, Ana Cavalli, Manuel Núñez, and Fatiha Zaidi. A passive testing approach based on invariants: application to the WAP. Computer Networks, 48:247–266, 2005.
[18] Gerd Behrmann, Alexandre David, and Kim G. Larsen. A Tutorial on Uppaal. Aalborg University, Denmark, November 2004.
[19] Axel Belinfante. JTorX: A tool for on-line model-driven test derivation and execution. In Tools and Algorithms for the Construction and Analysis of Systems, volume 6015, pages 266–270. Springer Berlin, 2010.
[20] Johan Bengtsson and Wang Yi. Timed automata: Semantics, algorithms and tools. Springer Berlin: Lectures on Concurrency and Petri Nets, 3098/2004:87–124, 2004.
[21] A. Benharref, R. Glitho, and R. Dssouli. A web service based-architecture for detecting faults in web services. In IFIP/IEEE International Symposium on Integrated Network Management, Nice - Acropolis, France, 15-19 May 2005. IEEE.
[22] Abdelghani Benharref, Rachida Dssouli, Mohamed Adel Serhani, Abdeslam En-Nouaary, and Roch Glitho. New approach for EFSM-based passive testing of web services. Testing of Software and Communicating Systems, 4581:13–27, 2007.
[23] Lina Bentakouk, Pascal Poizat, and Fatiha Zaïdi. A formal framework for service orchestration testing based on symbolic transition systems. In TestCom/FATES, pages 16–32, 2009.
[24] Ismail Berrada. Modélisation, Analyse et Test des systèmes communicants à contraintes temporelles : Vers une approche ouverte du test. PhD thesis, University Bordeaux 1, 2005.
[25] Ismail Berrada, Richard Castanet, and Patrick Felix. A formal approach for real-time test generation. In Workshop On Testing Real-Time and Embedded Systems (WTRTES), Satellite Workshop of Formal Methods (FM), pages 15–30, Pisa, Italy, September 2003.
[26] Ismail Berrada, Richard Castanet, and Patrick Felix. Testing communicating systems: a model, a methodology, and a tool. In TestCom 2005, volume LNCS 3502, pages 111–128, 2005.
[27] Ismail Berrada and Patrick Felix. TGSE : Un outil générique pour le test. In Proc. of CFIP'2005, pages 67–84. Hermès, 2005.
[28] Antonia Bertolino, Guglielmo De Angelis, Lars Frantzen, and Andrea Polini. The PLASTIC framework and tools for testing service-oriented applications. In International Summer School on Software Engineering, volume 5413, pages 106–139. LNCS, 2009.
[29] Antonia Bertolino, Lars Frantzen, Andrea Polini, and Jan Tretmans. Audition of web services for testing conformance to open specified protocols. Architecting Systems with Trustworthy Components, 3938/2006:1–25, 2006.
[30] Antonia Bertolino and Andrea Polini. The audition framework for testing web services interoperability. In 31st EUROMICRO Conference on Software Engineering and Advanced Applications, pages 134–142. IEEE Computer Society, 30 Aug - 3 Sept 2005.
[31] Faycal Bessayah, Ana Cavalli, Willian Maja, Eliane Martins, and Andre Willik Valenti. A fault injection tool for testing web services composition. In TAIC PART 2010, Windsor, UK, September 3-5, 2010.
[32] Henrik Bohnenkamp and Axel Belinfante. Timed testing with TorX. In FM 2005: Formal Methods, pages 173–188, Newcastle, UK, July 18-22, 2005. Lecture Notes in Computer Science.


[33] Adilson Luiz Bonifácio, Arnaldo Vieira Moura, Adenilso da Silva Simao, and José Carlos Maldonado. Towards deriving test sequences by model checking. Electronic Notes in Theoretical Computer Science, 195:21–40, 2008.
[34] BPELUnit. The open source unit testing framework for BPEL. http://www.bpelunit.net/.
[35] Antonio Bucchiarone and Stefania Gnesi. A survey on service composition languages and models. In International Workshop on Web Services Modeling and Testing (WsMaTe 2006), Palermo, Italy, June 2006.
[36] Antonio Bucchiarone, Hernan Melgratti, and Francesco Severoni. Testing service composition. In 8th Argentine Symposium on Software Engineering, Mar del Plata, Argentina, 29-31 August 2007.
[37] CADP. TGV manual page. http://www.inrialpes.fr/vasy/cadp/man/tgv.html.
[38] Gerardo Canfora and Massimiliano Di Penta. SOA: Testing and self-checking. In International Workshop on Web Services - Modeling and Testing, 2006.
[39] Honghua Cao, Shi Ying, and Dehui Du. Towards model-based verification of BPEL with model checking. In Proc. Sixth IEEE International Conference on Computer and Information Technology CIT'06, pages 190–190, Sept. 2006.
[40] Tien-Dung Cao, Patrick Felix, and Richard Castanet. WSOTF: An automatic testing tool for web services composition. In The Fifth International Conference on Internet and Web Applications and Services, pages 7–12, Barcelona, Spain, May 9-15, 2010.
[41] Tien-Dung Cao, Patrick Felix, Richard Castanet, and Ismail Berrada. Testing web services composition using the TGSE tool. In SERVICES '09: Proceedings of the 2009 Congress on Services - I, pages 187–194, Los Angeles, CA, USA, 2009. IEEE Computer Society.
[42] Tien-Dung Cao, Patrick Felix, Richard Castanet, and Ismail Berrada. Online testing framework for web services. In Third International Conference on Software Testing, Verification and Validation, pages 363–372, Paris, France, April 6-9, 2010.
[43] Tien-Dung Cao, Trung-Tien Phan-Quang, Patrick Felix, and Richard Castanet. Automated runtime verification for web services. In 2010 IEEE International Conference on Web Services, pages 76–82, Miami, Florida, USA, July 5-10, 2010.
[44] R. Castanet, O. Kone, and P. Laurencot. On the fly test generation for real time protocols. In 7th International Conference on Computer Communications and Networks, pages 378–385, Lafayette, LA, USA, October 12-15, 1998.
[45] Ana Cavalli, Azzedine Benameur, Wissam Mallouli, and Keqin Li. A passive testing approach for security checking and its practical usage for web services monitoring. In NOTERE 2009, Montreal, Canada, 2009.
[46] Ana Cavalli, Tien-Dung Cao, Wissam Mallouli, Eliane Martins, Andrey Sadovykh, Sebastien Salva, and Fatiha Zaidi. WebMov: A dedicated framework for the modelling and testing of web services composition. In 2010 IEEE International Conference on Web Services, pages 377–384, Miami, Florida, USA, July 5-10, 2010.
[47] Ana Cavalli, Caroline Gervy, and Svetlana Prokopenko. New approaches for passive testing using an extended finite state machine specification. Information and Software Technology, 45:837–852, 2003.
[48] Ana Cavalli, David Lee, Christian Rinderknecht, and Fatiha Zaidi. Hit-or-jump: an algorithm for embedded testing with applications to IN services. In IFIP International Conference FORTE/PSTV'99, Beijing, China, 5-8 October 1999.


[49] Ana Cavalli, Edgardo Montes De Oca, Wissam Mallouli, and Mounir Lallali. Two complementary tools for the formal testing of distributed systems with time constraints. In 12th IEEE/ACM International Symposium on Distributed Simulation and Real-Time Applications, pages 315–318. IEEE Computer Society, 2008.
[50] CPN Group, University of Aarhus, Denmark. CPN Tools: Computer tool for coloured Petri nets. http://wiki.daimi.au.dk/cpntools/_home.wiki.
[51] Frederic Cuppens, Nora Cuppens-Boulahia, and Thierry Sans. Nomad: A security model with non atomic actions and deadlines. In 18th IEEE Workshop on Computer Security Foundations, pages 186–196. IEEE Computer Society, 2005.
[52] Elisângela de Araújo Rodrigues Vieira. Automated Model-based Test Generation for Timed Systems. PhD thesis, University of Evry-Val d'Essonne, 2007.
[53] Wen-Li Dong, Hang Yu, and Yu-Bing Zhang. Testing BPEL-based web service composition using high-level Petri nets. In International Enterprise Distributed Object Computing Conference (EDOC'06). IEEE Computer Society, 2006.
[54] D. Dranidis, E. Ramollari, and D. Kourtesis. Run-time verification of behavioural conformance for conversational web services. In 2009 Seventh IEEE European Conference on Web Services, pages 139–147, Eindhoven, The Netherlands, November 9-11, 2009.
[55] Abdeslam En-Nouaary and Rachida Dssouli. A guided method for testing timed input output automata. In TestCom 2003, pages 211–225, 2003.
[56] Abdeslam En-Nouaary, Rachida Dssouli, Ferhat Khendek, and Abdelkader Elqortobi. Timed test cases generation based on state characterization technique. In Real-Time Systems Symposium, Madrid, Spain, 1998. IEEE Computer Society.
[57] Evan Martin, Suranjana Basu, and Tao Xie. WebSob: A tool for robustness testing of web services. In 29th International Conference on Companion Software Engineering ICSE 2007, pages 65–66, 20-26 May 2007.
[58] Eviware. http://www.eviware.com/.
[59] H. Foster, S. Uchitel, J. Magee, and J. Kramer. Model-based verification of web service compositions. In Proc. 18th IEEE International Conference on Automated Software Engineering, pages 152–161, 6-10 Oct. 2003.
[60] Lars Frantzen, Maria de las Nieves Huerta, Zsolt Gere Kiss, and Thomas Wallet. On-the-fly model-based testing of web services with Jambition. In Web Services and Formal Methods, volume 5387 of LNCS, pages 143–157. Springer Berlin, 2009.
[61] Lars Frantzen, Jan Tretmans, and Rene de Vries. Towards model-based testing of web services. In International Workshop on Web Services - Modeling and Testing - WS-MaTe2006, pages 67–82, Palermo, Italy, 2006.
[62] Chen Fu, Barbara Ryder, Ana Milanova, and David Wonnacott. Testing of Java web services for robustness. In ACM SIGSOFT Software Engineering Notes, volume 29, pages 23–34. ACM New York, NY, USA, July 2004.
[63] Xiang Fu. Formal Specification and Verification of Asynchronously Communicating Web Services. PhD thesis, University of California, September 2004.
[64] Xiang Fu, Tevfik Bultan, and Jianwen Su. Analysis of interacting BPEL web services. In International Conference on World Wide Web (WWW04), 2004.
[65] Xiang Fu, Tevfik Bultan, and Jianwen Su. WSAT: A tool for formal analysis of web services. In The 16th International Conference on Computer Aided Verification, 2004.


[66] José García-Fanjul, Claudio de la Riva, and Javier Tuya. Generation of conformance test suites for compositions of web services using model checking. In Proceedings of the Testing: Academic & Industrial Conference - Practice And Research Techniques (TAIC PART'06), 2006.
[67] José García-Fanjul, Javier Tuya, and Claudio de la Riva. Generating test case specifications for BPEL compositions of web services using SPIN. In International Workshop on Web Services Modeling and Testing (WS-MaTe'06), 2006.
[68] Christophe Gaston, Pascale Le Gall, Nicolas Rapin, and Assia Touil. Symbolic execution techniques for test purpose definition. In TestCom 2006, volume 3964, pages 1–18. LNCS, 2006.
[69] Arthur Gill. Introduction to the Theory of Finite-State Machines. New York: McGraw-Hill, 1962.
[70] Stefania Gnesi, Diego Latella, and Mieke Massink. Formal test-case generation for UML statecharts. In Proceedings of the Ninth IEEE International Conference on Engineering Complex Computer Systems, pages 75–84, 14-16 April 2004.
[71] A. Goldberg and K. Havelund. Automated runtime verification with Eagle. In Verification and Validation of Enterprise Information Systems, Miami, USA, May 24, 2005.
[72] Ariane Gravel, Xiang Fu, and Jianwen Su. An analysis tool for execution of BPEL services. In The 9th IEEE International Conference on E-Commerce Technology and the 4th IEEE International Conference on Enterprise Computing, E-Commerce, and E-Services, CEC/EEE, pages 429–432. IEEE Computer Society, 23-26 July 2007.
[73] R. Heckel and L. Mariani. Automatic conformance testing of web services. In Fundamental Approaches to Software Engineering, volume 3442/2005, pages 34–48, Edinburgh, Scotland, 2-10 April 2005. Springer.
[74] Anders Hessel, Kim G. Larsen, Marius Mikucionis, Brian Nielsen, Paul Pettersson, and Arne Skou. Formal Methods and Testing, chapter Testing Real-Time Systems Using UPPAAL, pages 77–117. Springer Berlin / Heidelberg, 2008.
[75] Anders Hessel, Kim G. Larsen, Brian Nielsen, Paul Pettersson, and Arne Skou. Time-optimal real-time test case generation using UPPAAL. In 3rd International Workshop on Formal Approaches to Testing of Software (FATES'03), 2003.
[76] Anders Hessel and Paul Pettersson. A test case generation algorithm for real-time systems. In International Conference on Quality Software QSIC'04, pages 268–273, Germany, 8-9 September 2004. IEEE Computer Society.
[77] Anders Hessel and Paul Pettersson. A global algorithm for model-based test suite generation. In Third Workshop on Model-Based Testing, March 31 - April 1, 2007.
[78] Teruo Higashino, Akio Nakata, Kenichi Taniguchi, and Ana R. Cavalli. Generating test cases for a timed I/O automaton model. In IFIP IWTCS'99, Budapest, Hungary, 1-3 September 1999.
[79] Sebastian Hinz, Karsten Schmidt, and Christian Stahl. Transforming BPEL to Petri nets. In The 3rd International Conference on Business Process Management, volume 3649, pages 220–235, Nancy, France, September 5-8, 2005. LNCS.
[80] Ian Ho and Jin-Cherng Lin. Generating test cases for real-time software by time Petri Nets model. In 8th Asian Test Symposium, pages 295–301. IEEE Computer Society, 1999.
[81] Gerard J. Holzmann. The model checker SPIN. IEEE Transactions on Software Engineering, 23(5):279–295, May 1997.
[82] Hyoung Seok Hong, Yong Rae Kwon, and Sung Deok Cha. Testing of object-oriented programs based on finite state machines. In the Second Asia Pacific Software Engineering Conference, 1995.


[83] Hyoung Seok Hong, Insup Lee, Oleg Sokolsky, and Sung Deok Cha. Automatic test generation from statecharts using model checking. Technical report, University of Pennsylvania, 2001.
[84] Jun Hou, Baowen Xu, Lei Xu, Di Wang, and Junling Xu. A testing method for web services composition based on data-flow. Wuhan University Journal of Natural Sciences, 13(4):455–460, August 2008.
[85] H. Huang, W. T. Tsai, R. Paul, and Y. Chen. Automated model checking and testing for composite web services. In International Symposium on Object-oriented Real-time distributed Computing (ISORC), pages 300–307, Seattle, May 2005. IEEE.
[86] Alfred Inselberg. The plane with parallel coordinates. The Visual Computer, 1(2):69–91, 1985.
[87] Claude Jard and Thierry Jéron. TGV: theory, principles and algorithms. International Journal on Software Tools for Technology Transfer, 7(4):297–315, 2005.
[88] T. Jéron. Génération de tests pour les systèmes réactifs et temporisés. Ecole d'Eté Temps-Réel, Télécom ParisTech, Paris, September 2009.
[89] R. Kazhamiakin, P. Pandya, and M. Pistore. Timed modeling and analysis in web service compositions. In The First International Conference on Availability, Reliability and Security, pages 840–846, 2006.
[90] ChangSup Keum, Sungwon Kang, In-Young Ko, Jongmoon Baik, and Young-Il Choi. Generating test cases for web services using extended finite state machine. In IFIP International Conference on Testing Communicating Systems (TESTCOM 2006), volume 3964/2006, pages 103–117, New York, 6-18 May 2006. Springer Berlin.
[91] Y.G. Kim, H.S. Hong, D.H. Bae, and S.D. Cha. Test cases generation from UML state diagrams. IEE Proceedings Software, 146(4):187–192, August 1999.
[92] Moez Krichen and Stavros Tripakis. Black-box conformance testing for real-time systems. In S. Graf and L. Mounier, editors, SPIN 2004, volume LNCS 2989, pages 109–126. Springer-Verlag Berlin Heidelberg, 2004.
[93] Mounir Lallali. Modélisation et test fonctionnel de l'orchestration de services Web. PhD thesis, Telecom Sud Paris, 2009.
[94] Mounir Lallali, Fatiha Zaidi, and Ana Cavalli. Timed modeling of web services composition for automatic testing. In SITIS '07: Proceedings of the 2007 Third International IEEE Conference on Signal-Image Technologies and Internet-Based Systems, pages 417–426, Shanghai, China, December 16-19, 2007. IEEE Computer Society.
[95] Mounir Lallali, Fatiha Zaidi, and Ana Cavalli. Transforming BPEL into intermediate format language for web services composition testing. In International Conference on Next Generation Web Services Practices, Seoul, October 2008.
[96] Mounir Lallali, Fatiha Zaidi, Ana Cavalli, and Iksoon Hwang. Automatic timed test case generation for web services composition. In European Conference on Web Services (ECOWS'08), Dublin, Ireland, November 2008.
[97] Kim G. Larsen, Marius Mikucionis, and Brian Nielsen. Online testing of real-time systems using UPPAAL. In Formal Approaches to Software Testing, pages 79–94, Linz, Austria, September 21, 2004. Springer Berlin.
[98] David Lee and Mihalis Yannakakis. Principles and methods of testing finite state machines - a survey. Proceedings of the IEEE, 84:1090–1123, 1996.
[99] J. Jenny Li and W. Eric Wong. Automatic test generation from communicating extended finite state machine (CEFSM)-based models. In International Symposium on Object-Oriented Real-Time Distributed Computing (ISORC'02). IEEE Computer Society, 2002.


[100] Zheng Li, Jun Han, and Yan Jin. Pattern-based specification and validation of web services interaction properties. In 3rd International Conference on Service Oriented Computing, pages 73–86, Amsterdam, The Netherlands, December 12-15, 2005.
[101] Zheng Li, Yan Jin, and Jun Han. A runtime monitoring and validation framework for web service interactions. In The Australian Software Engineering Conference ASWEC'06, pages 70–79, Washington, DC, USA, 2006. IEEE Computer Society.
[102] Zhong Jie Li, Wei Sun, Zhong Bo Jiang, and Xin Zhang. BPEL4WS unit testing: Framework and implementation. In IEEE International Conference on Web Services, pages 103–110, Orlando, FL, USA, July 11-15, 2005.
[103] Chien-Hung Liu, Shu-Ling Chen, and Xue-Yuan Li. A WS-BPEL based structural testing approach for web service compositions. In IEEE International Workshop on Service-Oriented System Engineering, pages 135–141, Jhongli, Taiwan, December 18-19, 2008. IEEE Computer Society.
[104] Nik Looker, Malcolm Munro, and Jie Xu. Simulating errors in web services. International Journal of Simulation Systems, Science & Technology, 5(5):29–37, December 2004.
[105] Nik Looker, Malcolm Munro, and Jie Xu. WS-FIT: A tool for dependability analysis of web services. In Annual International Computer Software and Applications Conference (COMPSAC'04), 2004.
[106] Nik Looker, Malcolm Munro, and Jie Xu. A comparison of network level fault injection with code insertion. In Annual International Computer Software and Applications Conference (COMPSAC'04), pages 479–484, 2005.
[107] Nik Looker and Jie Xu. Assessing the dependability of SOAP RPC-based web services by fault injection. In IEEE International Workshop on Object-Oriented Real-Time Dependable Systems, pages 163–170, Italy, October 1-3, 2003.
[108] Wissam Mallouli, Faycal Bessayah, Ana Cavalli, and Azzedine Benameur. Security rules specification and analysis based on passive testing. In 2008 IEEE Global Telecommunications Conference, pages 1–6, New Orleans, LA, USA, Nov 30 - Dec 4, 2008.
[109] Wissam Mallouli, Amel Mammar, and Ana R. Cavalli. A formal framework to integrate timed security rules within a TEFSM-based system specification. In 16th Asia-Pacific Software Engineering Conference, pages 489–496. IEEE Computer Society, 2009.
[110] Philip Mayer. Design and implementation of a framework for testing BPEL compositions. Master's thesis, Leibniz Universität Hannover, Germany, September 2006.
[111] Philip Mayer and Daniel Lubke. Towards a BPEL unit testing framework. In Workshop on Testing, Analysis and Verification of Web Software, pages 33–42, Portland, Maine, USA, July 17, 2006.
[112] Mercedes G. Merayo, Manuel Núñez, and Ismael Rodríguez. Formal testing from timed finite state machines. Computer Networks, 52(2):432–460, 2008.
[113] Marius Mikucionis, Kim G. Larsen, and Brian Nielsen. Online on-the-fly testing of real-time systems. Technical Report ISSN 0909-0878, Basic Research in Computer Science, Aalborg, Denmark, December 2003.
[114] Marius Mikucionis, Kim G. Larsen, and Brian Nielsen. T-UPPAAL: Online model-based testing of real-time systems. In 19th IEEE International Conference on Automated Software Engineering, pages 396–397, Linz, Austria, September 24, 2004. IEEE Computer Society.
[115] Montimage. WebMov case studies: definition of functional requirements and test purposes. Technical Report WEBMOV-FC-D5.1/T5.1, WebMov, 2009.
[116] S. Nakajima. Verification of web service flows with model-checking techniques. In First International Symposium on Cyber Worlds, pages 378–386, Tokyo, Japan, November 6-8, 2002.


[117] Manuel Núñez and Ismael Rodríguez. Conformance testing relations for timed systems. In FATES 2005, pages 103–117. Springer Berlin, 2005.

[118] OASIS. UDDI Spec Technical Committee. http://www.uddi.org/pubs/uddi_v3.htm.

[119] Jeff Offutt and Wuzhi Xu. Generating test cases for web services using data perturbation. In Workshop on Testing, Analysis and Verification of Web Services, July 2004.

[120] Chun Ouyang, Eric Verbeek, Wil M. P. van der Aalst, Stephan Breutel, Marlon Dumas, and Arthur H.M. ter Hofstede. Formal semantics and analysis of control flow in WS-BPEL. Sci. Comput. Program., 67(2-3):162–198, 2007.

[121] Chun Ouyang, Eric Verbeek, Wil M.P. van der Aalst, Stephan W. Breutel, Marlon Dumas, and Arthur H.M. ter Hofstede. WofBPEL: A tool for automated analysis of BPEL processes. In Third International Conference on Service Oriented Computing (ICSOC 2005), volume 3826 of LNCS, pages 484–489, Amsterdam, The Netherlands, December 12-15 2005.

[122] PushToTest. http://www.pushtotest.com/.

[123] Sylvain Rampacek. Sémantique, interactions et langages de description des services web complexes. PhD thesis, University of Reims Champagne-Ardenne, November 2006.

[124] Vlad Rusu, Lydie du Bousquet, and Thierry Jéron. An approach to symbolic test generation. In International Conference on Integrating Formal Methods, LNCS 1945, pages 338–357. Springer Verlag, November 2000.

[125] Sébastien Salva and Issam Rabhi. Automatic web service robustness testing from WSDL descriptions. In 12th European Workshop on Dependable Computing, Toulouse, France, May 14-15 2009.

[126] Sébastien Salva and Antoine Rollet. Testabilité des services web. In Ingénierie des Systèmes d'Information 13, pages 35–58, 2008.

[127] Karsten Schmidt. LoLA: a low level Petri net analyzer. http://www2.informatik.hu-berlin.de/top/lola/lola.html.

[128] Oleg Sokolsky, Usa Sammapun, Insup Lee, and Jesung Kim. Run-time checking of dynamic properties. Electron. Notes Theor. Comput. Sci., 144(4):91–108, 2006.

[129] Jan Springintveld, Frits Vaandrager, and Pedro R. D'Argenio. Testing timed automata. Theoretical Computer Science, 254(1-2):225–257, March 2001.

[130] Christian Stahl. BPEL2PN overview. http://www2.informatik.hu-berlin.de/top/bpel2pn/.

[131] Christian Stahl. A Petri net semantics for BPEL. Technical report, Humboldt-Universität zu Berlin, 2005.

[132] OASIS Standard. Web Services Business Process Execution Language version 2.0, April 2007. http://docs.oasis-open.org/wsbpel/2.0/OS/wsbpel-v2.0-OS.html.

[133] M. Tabourier and A. Cavalli. Passive testing and application to the GSM-MAP protocol. Information and Software Technology, 41:813–821, 1999.

[134] G. J. Tretmans and H. Brinksma. TorX: Automated model-based testing. In First European Conference on Model-Driven Software Engineering, pages 31–43, Nuremberg, Germany, December 11-12 2003.

[135] Jan Tretmans. A Formal Approach to Conformance Testing. PhD thesis, University of Twente, 1992.


[136] Jan Tretmans. Testing concurrent systems: A formal approach. In CONCUR'99 - 10th International Conference on Concurrency Theory, volume 1664, pages 46–65, Eindhoven, The Netherlands, 1999. Springer Verlag.

[137] W. T. Tsai, X. Wei, Y. Chen, and R. Paul. A robust testing framework for verifying web services by completeness and consistency analysis. In International Workshop on Service-Oriented System Engineering (SOSE), pages 151–158, Beijing, October 2005. IEEE.

[138] W. T. Tsai, X. Wei, Y. Chen, B. Xiao, R. Paul, and H. Huang. Developing and assuring trustworthy web services. In Proceedings of 7th International Symposium on Autonomous Decentralized Systems (ISADS), pages 43–50, Chengdu, China, April 4-8 2005.

[139] Wei-Tek Tsai, Yinong Chen, and Ray Paul. Specification-based verification and validation of web services and service-oriented operating systems. In Proceedings of the 10th IEEE International Workshop on Object-Oriented Real-Time Dependable Systems (WORDS'05), pages 139–147. IEEE Computer Society, 2-4 Feb 2005.

[140] W.T. Tsai, Ray Paul, Yamin Wang, Chun Fan, and Dong Wang. Extending WSDL to facilitate web services testing. In International Symposium on High Assurance Systems Engineering (HASE'02). IEEE Computer Society, 2002.

[141] W.T. Tsai, X. Wei, Y. Chen, R. Paul, and B. Xiao. Swiss cheese test case generation for web service testing. IEICE Transactions on Information and Systems, E88-D(12):2691–2698, December 2005.

[142] Verimag. IF tutorials. http://www-if.imag.fr/tutorials.html.

[143] W3C. WSDL Version 1.1, March 2001. http://www.w3.org/TR/wsdl.

[144] W3C. Web Services Choreography Description Language version 1.0, 2002. http://www.w3.org/TR/2004/WD-ws-cdl-10-20041217/.

[145] W3C. OWL-S: Semantic markup for web services, 2004. http://www.w3.org/Submission/OWL-S/.

[146] W3C. SOAP Version 1.2, April 2007. http://www.w3.org/TR/soap12-part0/.

[147] A. Wombacher, P. Fankhauser, and E. Neuhold. Transforming BPEL into annotated deterministic finite state automata for service discovery. In Proceedings of ICWS'04, 2004.

[148] Jun Yan, Zhongjie Li, Yuan Yuan, Wei Sun, and Jian Zhang. BPEL4WS unit testing: Test case generation using a concurrent path analysis approach. In 17th International Symposium on Software Reliability Engineering, pages 75–84, Raleigh, North Carolina, USA, November 07-10 2006.

[149] YanPing Yang, QingPing Tan, and Yong Xiao. Verifying web services composition based on hierarchical colored Petri nets. In IHIS '05: Proceedings of the first international workshop on Interoperability of heterogeneous information systems, pages 47–54, New York, NY, USA, 2005. ACM.

[150] YanPing Yang, QingPing Tan, JinShan Yu, and Feng Liu. Transformation BPEL to CP-nets for verifying web services composition. International Conference on Next Generation Web Services Practices, 0:137–142, 2005.

[151] Hsu-Chun Yen. Introduction to Petri Net Theory, volume 25 of Studies in Computational Intelligence. Springer Berlin / Heidelberg, 2006.

[152] Yuan Yuan, Zhongjie Li, and Wei Sun. A graph-search based approach to BPEL4WS test generation. In International Conference on Software Engineering Advances, page 14, Tahiti, French Polynesia, October 29-November 03 2006. IEEE Computer Society.


[153] Fatiha Zaidi. Contribution à la génération de tests pour les composants de service. Application aux services de Réseau Intelligent. PhD thesis, University of Evry Val d'Essonne, 2001.

[154] Yongyan Zheng and Paul Krause. Automata semantics and analysis of BPEL. In International Conference on Digital Ecosystems and Technologies (DEST'07). IEEE Computer Society, 2007.

[155] Yongyan Zheng, Jiong Zhou, and Paul Krause. Analysis of BPEL data dependencies. In 33rd EUROMICRO Conference on Software Engineering and Advanced Applications, pages 351–358. IEEE Computer Society, 28-31 August 2007.

[156] Yongyan Zheng, Jiong Zhou, and Paul Krause. An automatic test case generation framework for web services. Journal of Software, 2(3):64–77, September 2007.

[157] Yongyan Zheng, Jiong Zhou, and Paul Krause. A model checking based test case generation framework for web services. In International Conference on Information Technology (ITNG'07). IEEE Computer Society, 2007.


Appendix A

The tutorial of the tools

A.1 WSOTF

• To test a web service orchestration using WSOTF, before deploying the SUT, we must replace the endpoints of the partners with the endpoints handled by WSOTF, as follows: http://localhost:8888/wsotf/partnername, where the machine name (localhost), the port number (8888) and the base URL (/wsotf) are configured in the wsotf.properties file. The partnername is the name defined in the corresponding WSDL file.

<wsdl:definitions name="bank" ...>
  ...
  <wsdl:service name="bankService">
    <wsdl:port binding="impl:bankPortSoapBinding" name="bankPort">
      <wsdlsoap:address location="http://localhost:8888/wsotf/bank"/>
    </wsdl:port>
  </wsdl:service>
</wsdl:definitions>
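This endpoint substitution can be automated. The sketch below is our own illustration (not part of WSOTF); the function name and the regular expression are assumptions. It rewrites every location attribute in a partner WSDL to the WSOTF handler URL built from the wsotf.properties values:

```python
import re

def rewrite_endpoints(wsdl_text, partner, host="localhost", port=8888, base="/wsotf"):
    """Point every soap:address location at the WSOTF handler for this partner."""
    handler = "http://%s:%d%s/%s" % (host, port, base, partner)
    # Replace any location="..." attribute with the WSOTF handler URL.
    return re.sub(r'location="[^"]*"', 'location="%s"' % handler, wsdl_text)

# Example: the bank partner of the WSDL fragment above.
bank_wsdl = '<wsdlsoap:address location="http://realhost:8080/services/bank"/>'
rewritten = rewrite_endpoints(bank_wsdl, "bank")
```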

• In the current version, the supported binding style is SOAP document [146]. SOAP messages formatted in the RPC style are not supported.

<wsdl:binding name="xLoanBinding" type="impl:xLoanPT">
  <wsdlsoap:binding style="document"
      transport="http://schemas.xmlsoap.org/soap/http"
      xmlns:wsdlsoap="http://schemas.xmlsoap.org/wsdl/soap/"/>
  ...
</wsdl:binding>

• The command line to run WSOTF independently is:

java -jar wsotf.jar -i inputfile -o outputfile -n tracenumber

A.1.1 TEFSM designer

The TEFSM designer is developed to create the input format for the WSOTF engine through a graphical interface. This tool allows us to visualize the test results as trees or as paths on the TEFSM graph. Because this tool is designed to build TEFSMs for web services, we can only add properties on a transition after indicating which service is the SUT and which are its partners via their WSDLs. Figure A.1 shows the main interface of this tool; figures A.2 and A.3 illustrate how the test results are shown as a tree and as paths on the graph.


Figure A.1: The main GUI of TEFSM Designer


Figure A.2: The visualization of the test results by the tree

Figure A.3: The graphic visualization of the test results


A.2 RV4WS

To easily define the rules and visualize the checking analysis, we have developed a graphical user interface (GUI). This GUI allows us to create a new project, define the rules of a project, visualize the results, and run or pause the checking analysis. Before running this tool, we need to configure the port on which its TCP server component listens to receive the messages from the trace collection engine (i.e., the proxy in our case). This configuration is saved in the rv4ws.properties file.

A.2.1 Graphical interface

We present here the main dialogs of the tool. Figure A.4 shows the main GUI of RV4WS, which is composed of three parts: a project explorer, a checking analysis panel, and a test results panel showing a verdict, an occurrence time, the message name and an extract of its content.


Figure A.4: The main GUI of RV4WS


Figure A.5: Dialog to define a rule

Figure A.6: Dialog to define the properties (data correlation)


A.2.2 Proxy

The command line to run the proxy is: java -jar JProxy.jar -p port -fs fwdserver -fp fwdport -cf config, where config is an XML file with the following format:

<?xml version="1.0" encoding="UTF-8"?>
<config>
  <!-- list of endpoints mapped to service names -->
  <endpoints>
    <endpoint servicename="amazonuk">
      http://147.210.129.50:8888/AmazonUKService/AmazonUK
    </endpoint>
    <endpoint servicename="currencyexchange">
      http://147.210.129.50:8888/CurrencyExchangeService/CurrencyExchange
    </endpoint>
    <endpoint servicename="amazonfr">
      http://147.210.129.50:8888/AmazonFRService/AmazonFR
    </endpoint>
    <endpoint servicename="purchase">
      http://147.210.129.50:8888/PurchaseServiceService/PurchaseService
    </endpoint>
  </endpoints>

  <!-- port and server name of RV4WS's TCP server component -->
  <checkingengine>
    <checkingengineport>9090</checkingengineport>
    <checkingenginename>localhost</checkingenginename>
  </checkingengine>
</config>
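To illustrate how such a configuration can be consumed, here is a small sketch using Python's standard XML parser. This is our own illustration, not the JProxy code, and the function name is an assumption:

```python
import xml.etree.ElementTree as ET

def parse_proxy_config(text):
    """Return (service -> endpoint URL map, (engine host, engine port))."""
    root = ET.fromstring(text)
    # Each <endpoint> maps a service name to its real endpoint URL.
    endpoints = {e.get("servicename"): e.text.strip()
                 for e in root.find("endpoints")}
    engine = root.find("checkingengine")
    host = engine.find("checkingenginename").text.strip()
    port = int(engine.find("checkingengineport").text)
    return endpoints, (host, port)

cfg = """<config>
  <endpoints>
    <endpoint servicename="purchase">http://147.210.129.50:8888/PurchaseServiceService/PurchaseService</endpoint>
  </endpoints>
  <checkingengine>
    <checkingengineport>9090</checkingengineport>
    <checkingenginename>localhost</checkingenginename>
  </checkingengine>
</config>"""
endpoints, engine = parse_proxy_config(cfg)
```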

A.3 TGSE

TGSE only supports the integer and boolean data types, so all variables that appear in transition conditions are mapped to integers for evaluation. An ETIOA in TGSE is described by: a number of states, an initial state, a list of clock variables and a list of transitions. Each transition t is composed of:

1. source_state(id, name);

2. target_state(id, name);

3. event (nop denotes an internal event);

4. guard condition on clocks (# denotes true);

5. guard condition on variable (# denotes true);

6. reset clocks (# denotes empty);

7. update variables (# denotes empty).
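As an illustration of this seven-field format, the following sketch (our own code, not part of TGSE) splits a transition line into its fields, ignoring commas that occur inside the state pairs and inside closed clock intervals; half-open intervals such as ]10000,+inf] would need extra care:

```python
def parse_transition(line):
    """Split a TGSE transition line into its seven fields: source state,
    target state, event, clock guard, variable guard, resets, updates.
    Only commas at bracket depth 0 separate fields."""
    fields, depth, cur = [], 0, ""
    for ch in line:
        if ch == "," and depth == 0:
            fields.append(cur.strip())
            cur = ""
            continue
        if ch in "([":
            depth += 1
        elif ch in ")]":
            depth -= 1
        cur += ch
    fields.append(cur.strip())
    return fields
```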

The following is the ETIOA of xLoan, derived from the TEFSM of figure A.1, where the risk and response variables are mapped to the integer type instead of the string type (risk = 1 ⇔ risk = high, risk = 0 ⇔ risk != high, response = 1 ⇔ response = 'accept' and response = 0 ⇔ response != 'accept'). Because the variables req_amount, risk and response are evaluated by the transition conditions and depend on input data, we declare three corresponding parameters: p_req_amount, p_check_resp and p_xloan_resp.

-----------------------------------------------------------------------
P_AUTO xLoan
{
nb_states: 15
initial_state: 0

(0, init), (1, s1), ?xloan_request, #, #, #, req_amount=p_req_amount
(1, s1), (3, s3), nop, #, req_amount[1,10000], #, #
(1, s1), (2, s2), nop, #, req_amount]10000,+inf], #, #
(2, s2), (5, s5), !assess_check_request, #, #, #, #
(3, s3), (4, s4), !bank_approve_request, #, #, #, #
(4, s4), (7, s7), ?bank_approve_response, #, #, #, #
(5, s5), (6, s6), ?assess_check_response, #, #, #, risk=p_check_resp
(6, s6), (7, s7), nop, #, risk[1,1], #, #
(6, s6), (3, s3), nop, #, risk[0,0], #, #
(7, s7), (8, s8), !xloan_response, #, #, #, response=p_xloan_resp
(8, s8), (9, s9), nop, #, response[1,1], h:=t, #
(8, s8), (14, s14), nop, #, response[0,0], #, #
(9, s9), (10, s10), nop, t[60,60], #, #, #
(9, s9), (11, s11), ?xloan_confirm, t[0,60[, #, #, #
(10, s10), (12, s12), !bank_cancel, #, #, #, #
(11, s11), (13, s13), !bank_confirm, #, #, #, #
(13, s13), (14, s14), nop, #, #, #, #
(12, s12), (14, s14), nop, #, #, #, #
}
-----------------------------------------------------------------------

Test purpose: The xLoan process is initiated by receiving an input request from the client. Next, it sends an approval request to the bank service. The client then sends a confirm message to the xLoan process at time t=20. Finally, the bank service receives a confirm request from the xLoan process. The test purpose for this scenario is formulated in TGSE as follows:

-----------------------------------------------------------------------
TESTER test
{
nb_states = 5
initial_state = 1
final_state = 5

(1, init), (2, s2), !xloan_request, #, #, #, #
(2, s2), (3, s3), ?bank_approve_request, #, #, #, #
(3, s3), (4, s4), !xloan_confirm, t[20,20], #, #, #
(4, s4), (5, finish), ?bank_confirm, #, #, #, #
}
-----------------------------------------------------------------------

A Communicating System Under Test (CSUT) is declared as follows:

-----------------------------------------------------------------------
SYSTEM xLoan

nb_automatons: 1
nb_rules: 4
parameters: p_req_amount p_check_resp p_xloan_resp
clocks: t

automaton_files: [xLoan/xloan.aut]
tester_file: xLoan/tester.aut

RULES
<?xloan_request, !xloan_request>
<!bank_approve_request, ?bank_approve_request>
<?xloan_confirm, !xloan_confirm>
<!bank_confirm, ?bank_confirm>
-----------------------------------------------------------------------
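Each rule pairs an emission (!) with the matching reception (?) of the same message. A quick sanity check over such rule lists might look like the following sketch (our own illustration, not TGSE code):

```python
def rules_complementary(rules):
    """True iff every rule pairs !m with ?m (in either order) on the same message."""
    return all({a[0], b[0]} == {"!", "?"} and a[1:] == b[1:] for a, b in rules)

# The four synchronization rules of the xLoan CSUT above.
xloan_rules = [("?xloan_request", "!xloan_request"),
               ("!bank_approve_request", "?bank_approve_request"),
               ("?xloan_confirm", "!xloan_confirm"),
               ("!bank_confirm", "?bank_confirm")]
```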

Page 129: TestandValidationofWebServices - u-bordeaux.frori-oai.u-bordeaux1.fr/pdf/2010/CAO_TIEN_DUNG_2010.pdf · TestandValidationofWebServices presentedby Ti´ênD ungCao˜ ... Au debut,

A.4. BPELUnit 115

Using TGSE to simulate this CSUT, we obtain the following timed test case. In our case, we focus on gray-box testing, meaning that we only cover the input/output events and delay actions; we are not interested in the internal events. So, from the TGSE result, we drop the internal actions.

-----------------------------------------------------------------------
1. ?xloan_request (p_req_amount=1)
2. !bank_approve_request
3. ?bank_approve_response
4. !xloan_response (p_xloan_resp=1)
5. ?xloan_confirm
6. !bank_confirm
-----------------------------------------------------------------------
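Dropping the internal actions from a TGSE trace amounts to filtering out the nop steps; a minimal sketch (our own illustration, with an assumed string encoding of trace steps):

```python
def observable(trace):
    """Keep input (?...), output (!...) and delay actions; drop internal nop steps."""
    return [step for step in trace if not step.startswith("nop")]

# A raw trace with interleaved internal actions, as a simulator might emit it.
raw_trace = ["?xloan_request", "nop", "!bank_approve_request", "nop",
             "?bank_approve_response", "!xloan_response",
             "?xloan_confirm", "!bank_confirm"]
```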

A.4 BPELUnit

BPELUnit [34] is an open source tool to execute concrete timed test cases for white-box testing. It allows us to:

• send a request with a time delay and receive a response;

• verify the content of a response message;

• simulate the partner services.

This tool uses a distributed test architecture in which there is no synchronization between the simulated partner tracks to verify the order of messages or the data correlation between them. To execute a test case, the client track and all partner tracks are started to perform their defined activities. If any track reports a failure, a fail verdict is assigned; otherwise, a pass verdict is produced. For each partner track, it supports six activity types: (i) send asynchronous, (ii) receive asynchronous, (iii) send/receive synchronous, (iv) receive/send synchronous, (v) send/receive asynchronous, (vi) receive/send asynchronous.
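The verdict rule just described (fail as soon as any track reports unsuccessfully, pass otherwise) can be sketched as follows; this is our own illustration, not BPELUnit's implementation:

```python
def overall_verdict(track_reports):
    """track_reports maps each track (client or partner) to True on success.
    A single failing track makes the whole test case fail."""
    return "pass" if all(track_reports.values()) else "fail"
```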

Figure A.7: Illustration of BPELUnit’s test activities


Figure A.7 shows an example of BPELUnit's test activities. This tool does not allow us to automatically derive test cases; instead, it provides a graphical user interface (GUI) to manually define them. Figure A.8 shows the main GUI of BPELUnit, where we can edit a test suite, select a SUT, add the partners, create a new test case and define all activities for each partner.

Figure A.8: The main GUI of BPELUnit

Figure A.9 presents the edition interface for the Send/Receive Synchronous activity of the client track, where we can select a port name and an operation, and define the literal XML data (the SOAP template is not automatically generated) to send to the SUT. Figure A.10 presents the test execution result with a pass verdict.

As with WSOTF, we need to replace the endpoints of the partners with BPELUnit's endpoint handler (Base URL) before deploying the SUT.


Figure A.9: Edition of Send/Receive Synchronous of BPELUnit

Figure A.10: Test case execution of BPELUnit


Appendix B

Specification of xLoan web service

B.1 WSDL file

<?xml version="1.0" encoding="UTF-8"?>
<wsdl:definitions name="xloan"
    targetNamespace="http://fr.webmov.labri/xloan/"
    xmlns:apachesoap="http://xml.apache.org/xml-soap"
    xmlns:impl="http://fr.webmov.labri/xloan/"
    xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
    xmlns:wsdlsoap="http://schemas.xmlsoap.org/wsdl/soap/"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema">

  <wsdl:types>
    <schema targetNamespace="http://fr.webmov.labri/xloan/"
        xmlns="http://www.w3.org/2001/XMLSchema">
      <complexType name="LoanInfo">
        <sequence>
          <element name="id" type="xsd:int"/>
          <element name="name" type="xsd:string"/>
          <element name="income" type="xsd:int"/>
          <element name="amount" type="xsd:int"/>
          <element name="maxpayment" type="xsd:int"/>
          <element name="maxmonth" type="xsd:int"/>
        </sequence>
      </complexType>
      <element name="requestInfo" type="impl:LoanInfo"/>
      <element name="requestRes" type="xsd:string"/>
      <element name="confirmIn">
        <complexType>
          <sequence>
            <element name="id" type="xsd:int"/>
          </sequence>
        </complexType>
      </element>
    </schema>
  </wsdl:types>

  <wsdl:message name="requestRequest">
    <wsdl:part element="impl:requestInfo" name="requestInfo"/>
  </wsdl:message>

  <wsdl:message name="requestResponse">
    <wsdl:part element="impl:requestRes" name="requestRes"/>
  </wsdl:message>

  <wsdl:message name="xLoanConfirmRequest">
    <wsdl:part element="impl:confirmIn" name="confirmIn"/>
  </wsdl:message>

  <wsdl:portType name="xLoanPT">
    <wsdl:operation name="request" parameterOrder="approveInfo">
      <wsdl:input message="impl:requestRequest" name="requestRequest"/>
      <wsdl:output message="impl:requestResponse" name="requestResponse"/>
    </wsdl:operation>
    <wsdl:operation name="confirm" parameterOrder="confirmIn">
      <wsdl:input message="impl:xLoanConfirmRequest" name="confirmRequestIn"/>
    </wsdl:operation>
  </wsdl:portType>

  <wsdl:binding name="xLoanBinding" type="impl:xLoanPT">
    <wsdlsoap:binding style="document"
        transport="http://schemas.xmlsoap.org/soap/http"
        xmlns:wsdlsoap="http://schemas.xmlsoap.org/wsdl/soap/"/>
    <wsdl:operation name="request">
      <wsdlsoap:operation soapAction="" style="document"
          xmlns:wsdlsoap="http://schemas.xmlsoap.org/wsdl/soap/"/>
      <wsdl:input>
        <wsdlsoap:body use="literal"
            xmlns:wsdlsoap="http://schemas.xmlsoap.org/wsdl/soap/"/>
      </wsdl:input>
      <wsdl:output>
        <wsdlsoap:body use="literal"
            xmlns:wsdlsoap="http://schemas.xmlsoap.org/wsdl/soap/"/>
      </wsdl:output>
    </wsdl:operation>
    <wsdl:operation name="confirm">
      <wsdlsoap:operation soapAction="" style="document"
          xmlns:wsdlsoap="http://schemas.xmlsoap.org/wsdl/soap/"/>
      <wsdl:input>
        <wsdlsoap:body use="literal"
            xmlns:wsdlsoap="http://schemas.xmlsoap.org/wsdl/soap/"/>
      </wsdl:input>
    </wsdl:operation>
  </wsdl:binding>

  <wsdl:service name="xLoan">
    <wsdl:port binding="impl:xLoanBinding" name="xLoanPort">
      <wsdlsoap:address
          location="http://localhost:8080/active-bpel/services/xLoan"
          xmlns:wsdlsoap="http://schemas.xmlsoap.org/wsdl/soap/"/>
    </wsdl:port>
  </wsdl:service>
</wsdl:definitions>

B.2 BPEL code

<?xml version="1.0" encoding="UTF-8"?>
<bpel:process name="xLoan"
    xmlns:bpel="http://docs.oasis-open.org/wsbpel/2.0/process/executable"
    xmlns:impl="http://fr.webmov.labri/xloan/"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:impl1="http://fr.webmov.labri/assessment/"
    xmlns:impl2="http://fr.webmov.labri/bank/"
    suppressJoinFailure="yes"
    targetNamespace="http://xLoan">

  <bpel:extensions>
    <bpel:extension mustUnderstand="no" namespace="mustUnderstand"/>
  </bpel:extensions>

  <bpel:import importType="http://schemas.xmlsoap.org/wsdl/"
      location="../wsdl/bank.wsdl" namespace="http://fr.webmov.labri/bank/"/>
  <bpel:import importType="http://schemas.xmlsoap.org/wsdl/"
      location="../wsdl/assessment.wsdl"
      namespace="http://fr.webmov.labri/assessment/"/>
  <bpel:import importType="http://schemas.xmlsoap.org/wsdl/"
      location="../wsdl/xloan.wsdl" namespace="http://fr.webmov.labri/xloan/"/>

  <bpel:partnerLinks>
    <bpel:partnerLink myRole="LoanProvider" name="client"
        partnerLinkType="impl:LoanPLT"/>
    <bpel:partnerLink myRole="AssRequest" name="ass"
        partnerLinkType="impl:Assessment" partnerRole="AssProvider"/>
    <bpel:partnerLink myRole="AppRequest" name="ban"
        partnerLinkType="impl:Bank" partnerRole="AppProvider"/>
  </bpel:partnerLinks>

  <bpel:variables>
    <bpel:variable element="impl:requestInfo" name="requestInfo"/>
    <bpel:variable element="impl:requestRes" name="requestRes"/>
    <bpel:variable element="impl1:checkInfo" name="checkInfo"/>
    <bpel:variable element="impl1:checkRes" name="checkRes"/>
    <bpel:variable element="impl2:approveInfo" name="approveInfo"/>
    <bpel:variable messageType="impl:xLoanConfirmRequest"
        name="clientConfirmRequest"/>
    <bpel:variable messageType="impl2:confirmRequest"
        name="bankConfirmRequest"/>
    <bpel:variable messageType="impl2:cancelRequest" name="cancelRequest"/>
    <bpel:variable messageType="impl2:approveResponse" name="approveRes"/>
  </bpel:variables>

  <bpel:correlationSets>
    <bpel:correlationSet name="CS1" properties="impl:cusIdProperty"/>
  </bpel:correlationSets>

  <bpel:flow>
    <bpel:links>
      <bpel:link name="L8"/>
      <bpel:link name="L7"/>
      <bpel:link name="L2"/>
      <bpel:link name="L3"/>
      <bpel:link name="L1"/>
      <bpel:link name="L9"/>
      <bpel:link name="L4"/>
      <bpel:link name="L5"/>
      <bpel:link name="L6"/>
    </bpel:links>

    <bpel:assign>
      <bpel:targets>
        <bpel:target linkName="L6"/>
      </bpel:targets>
      <bpel:sources>
        <bpel:source linkName="L8"/>
      </bpel:sources>
      <bpel:copy>
        <bpel:from part="checkRes" variable="approveRes"/>
        <bpel:to variable="requestRes"/>
      </bpel:copy>
    </bpel:assign>

    <bpel:assign>
      <bpel:targets>
        <bpel:target linkName="L5"/>
      </bpel:targets>
      <bpel:sources>
        <bpel:source linkName="L7"/>
      </bpel:sources>
      <bpel:copy>
        <bpel:from>'reject'</bpel:from>
        <bpel:to variable="requestRes"/>
      </bpel:copy>
    </bpel:assign>

    <bpel:assign>
      <bpel:targets>
        <bpel:target linkName="L1"/>
      </bpel:targets>
      <bpel:sources>
        <bpel:source linkName="L2">
          <bpel:transitionCondition>$requestInfo/amount &gt; 10000
          </bpel:transitionCondition>
        </bpel:source>
        <bpel:source linkName="L3">
          <bpel:transitionCondition>$requestInfo/amount &lt;= 10000
          </bpel:transitionCondition>
        </bpel:source>
      </bpel:sources>
      <bpel:copy>
        <bpel:from variable="requestInfo"/>
        <bpel:to variable="checkInfo"/>
      </bpel:copy>
      <bpel:copy>
        <bpel:from variable="requestInfo"/>
        <bpel:to variable="approveInfo"/>
      </bpel:copy>
    </bpel:assign>

    <bpel:receive createInstance="yes" operation="request"
        partnerLink="client" variable="requestInfo">
      <bpel:sources>
        <bpel:source linkName="L1"/>
      </bpel:sources>
      <bpel:correlations>
        <bpel:correlation initiate="yes" set="CS1"/>
      </bpel:correlations>
    </bpel:receive>

    <bpel:reply operation="request" partnerLink="client"
        variable="requestRes">
      <bpel:targets>
        <bpel:target linkName="L8"/>
        <bpel:target linkName="L7"/>
      </bpel:targets>
      <bpel:sources>
        <bpel:source linkName="L9">
          <bpel:transitionCondition>$requestRes = 'accept'
          </bpel:transitionCondition>
        </bpel:source>
      </bpel:sources>
    </bpel:reply>

    <bpel:invoke inputVariable="checkInfo" operation="check"
        outputVariable="checkRes" partnerLink="ass">
      <bpel:targets>
        <bpel:target linkName="L2"/>
      </bpel:targets>
      <bpel:sources>
        <bpel:source linkName="L4">
          <bpel:transitionCondition>$checkRes = 'low'
          </bpel:transitionCondition>
        </bpel:source>
        <bpel:source linkName="L5">
          <bpel:transitionCondition>$checkRes = 'high'
          </bpel:transitionCondition>
        </bpel:source>
      </bpel:sources>
    </bpel:invoke>

    <bpel:invoke inputVariable="approveInfo" operation="approve"
        outputVariable="approveRes" partnerLink="ban">
      <bpel:targets>
        <bpel:target linkName="L3"/>
        <bpel:target linkName="L4"/>
      </bpel:targets>
      <bpel:sources>
        <bpel:source linkName="L6"/>
      </bpel:sources>
    </bpel:invoke>

    <bpel:pick>
      <bpel:targets>
        <bpel:target linkName="L9"/>
      </bpel:targets>
      <bpel:onMessage operation="confirm" partnerLink="client"
          variable="clientConfirmRequest">
        <bpel:correlations>
          <bpel:correlation initiate="no" set="CS1"/>
        </bpel:correlations>
        <bpel:sequence>
          <bpel:assign>
            <bpel:copy>
              <bpel:from part="confirmIn" variable="clientConfirmRequest"/>
              <bpel:to part="confirmIn" variable="bankConfirmRequest"/>
            </bpel:copy>
          </bpel:assign>
          <bpel:invoke inputVariable="bankConfirmRequest"
              operation="confirm" partnerLink="ban"/>
        </bpel:sequence>
      </bpel:onMessage>
      <bpel:onAlarm>
        <bpel:for>'P0Y0M0DT0H0M60S'</bpel:for>
        <bpel:sequence>
          <bpel:assign>
            <bpel:copy>
              <bpel:from variable="requestInfo">
                <bpel:query>id</bpel:query>
              </bpel:from>
              <bpel:to part="cancelIn" variable="cancelRequest"/>
            </bpel:copy>
          </bpel:assign>
          <bpel:invoke inputVariable="cancelRequest" operation="cancel"
              partnerLink="ban"/>
        </bpel:sequence>
      </bpel:onAlarm>
    </bpel:pick>
  </bpel:flow>
</bpel:process>
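To summarize the control flow of this process, the sketch below restates the xLoan decision logic in plain code (our own simplification, not executable BPEL): requests above 10000 are first risk-checked by the assessment service and rejected when the risk is high; all other requests go to the bank for approval. The callables risk_check and bank_approve stand in for the two partner services.

```python
def xloan_decision(amount, risk_check, bank_approve):
    """Simplified xLoan control flow: large requests are risk-checked first."""
    if amount > 10000:
        if risk_check(amount) == "high":
            return "reject"          # high risk: reply 'reject' without the bank
    return bank_approve(amount)      # small or low-risk requests: bank approval
```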


Index

Abstract timed test case, 45
Communicating System Under Test, 52
Conformance relation
    ioco, 19
    tioco, 20
    xtioco, 43
Feasible path, 51
Formal model
    aDFA, 26
    BFG, 28
    ETIOA, 52
    FSP, 28
    Guard Automata, 26
    Petri nets, 27
    STS, 27
    TEFSM, 37
    WSA, 27
    WSTTS, 26
Offline testing, 44
Online testing, 57
Synchronous product, 49
Test purpose, 45
Test types
    Accessibility
        Black-box, 19
        Gray-box, 19
        White-box, 19
    Characteristics
        Conformance, 18
        Performance, 18
        Reliability, 19
        Robustness, 18
        Security, 18
    Controllability
        Active, 19
        Passive, 19
    Phases
        Integrated, 19
        System, 19
        Unit, 19
Tools
    BPELUnit, 108
    Jambition, 29
    PLASTIC, 29
    RV4WS, 82
    SOAPUI, 29
    T-Uppaal, 25
    TestGen_IF, 24
    TestMaker, 29
    TGSE, 24
    TGV, 24
    TorX, 25
    WS-AT, 29
    WS-TAXI, 29
    WSOTF, 77
Web service, 2
    BPEL, 14
        assign, 16
        catch, 17
        catchAll, 17
        compensationHandler, 17
        empty, 16
        eventHandlers, 17
        exit, 16
        faultHandlers, 17
        flow, 17
        if, 16
        invoke, 16
        onAlarm, 17
        onEvent, 17
        pick, 16
        receive, 16
        repeatUntil, 16
        reply, 16
        scope, 17
        sequence, 16
        throw, 16
        wait, 16
        while, 16
    SOAP, 12
    UDDI, 13
    WSDL, 12
