
Comparing Test Design Techniques

for Open Source Systems

Guido Di Campli, Mälardalen University, Sweden, +393209016130

Savino Ordine, Mälardalen University, Sweden, +393283895057

Supervisor: Sigrid Eldh, Ericsson AB, Sweden, +46107152374

Examiner: Sasikumar Punnekkat, Mälardalen University, Sweden, +4621107324

12th November 2009

Page 2: Comparing Test Design Techniques for Open Source Systemsmdh.diva-portal.org/smash/get/diva2:302237/FULLTEXT01.pdf · Comparing Test Design Techniques for Open Source Systems 3 Abstract

Comparing Test Design Techniques for Open Source Systems

2

Page 3: Comparing Test Design Techniques for Open Source Systemsmdh.diva-portal.org/smash/get/diva2:302237/FULLTEXT01.pdf · Comparing Test Design Techniques for Open Source Systems 3 Abstract

Comparing Test Design Techniques for Open Source Systems

3

Abstract

In this thesis we describe how to test systematically, with open source systems as our target. We have applied a series of common and overlapping test design techniques at defined levels, specifically seven different functional and structural test approaches. Our conclusion is that open source systems often lack fundamental testing: on average, only 6 test cases are needed to reveal the first failure, the time to the first failure is about 1 hour, and the mean time between failures is approximately 2 hours with our systematic approach. Our systematic approach does not consist of testing alone; we also describe the process of discovering a system's requirements. We have also found that some test design techniques appear to be more effective than others at exposing failures. We have investigated fifteen different open source systems and attempted to classify them in a methodical way. Our process includes measuring the time spent identifying the unique part of the system to which a test case is applied. We consider both the system and the test design technique when evaluating effectiveness and constructing test cases.


Acknowledgements

My first thanks are for my family, always present in my life and ready to give me the right support in difficult times. I could not ask for a better family. Today I am here because of you… Thank you from the deepest part of my heart.

Special thanks go to Sigrid Eldh. In her we found a mentor, a friend and a good adviser. I cannot fail to mention my best friends ever: Francesco D'Onofrio and Giulia Trivilini. I always look at you as my own small private family, and I am really lucky to have you close to me.

A special thought goes to my late grandmothers, Iolanda Corrispresto and Speranza Liberatore. You are close to my heart and your teachings will be with me until the end of time.

I would like to take this opportunity to thank Iolanda Di Campli (my lovely little sister), Ordine Savino, all the friends from GSEEM, Henry Muccini and all my lifelong friends. My last thank you is for Joanne (my sweet Giulia), for making me understand that sometimes dreams become reality. Guido.

In Italian: My first thanks go to my family, who have always been present in my life and always ready to support me in difficult moments. I could never have better parents than you. If I am here today, I owe it to you… Thank you from the deepest part of my heart.

A special thank you goes to Sigrid Eldh. In her we found a point of reference, a friend and a good adviser. I cannot fail to mention my best friends ever: Francesco D'Onofrio and Giulia Trivilini. I look at you as my own small private family, and I am truly lucky to have you by my side.

A special thought goes to my grandmothers, who passed away during this Master's: Iolanda Corripresto and Speranza Liberatore. You will always remain in my heart, and your teachings will stay with me until the end of time.

I take this opportunity to thank Iolanda Di Campli (my lovely little sister), Ordine Savino, the GSEEM friends, Henry Muccini and all my lifelong friends.

My last thank you goes to Joanne (my sweet Giulia) for making me understand that sometimes dreams can become reality.

Heartfelt thanks to everyone, Guido.


My first thanks go to my family, for having shared both the bad and the beautiful moments with me and for supporting me in everything. Thank you so much Mom, Dad and my little sister. I would also like to thank Sigrid Eldh for guiding me through this Master's thesis and for all the meetings and presentations she reviewed. Last but not least, thanks to my friend Guido Di Campli for the shared ideas and time, to Henry Muccini for giving us all the support we needed, and to Wen Chang for always being close to me and making my life wonderful.

Thank you Västerås!

Savino.

First of all I would like to thank my family for having shared both the bad and the beautiful moments with me, for giving me all the support I needed and for believing in me on this journey abroad. I will always remember all the sacrifices you made for me. Thank you so much Mom, Dad and my little Sister.

I would also like to thank my supervisor Sigrid Eldh for having helped and guided me through this thesis. A last thank you goes to my friend Guido Di Campli for having shared ideas and time, to Henry Muccini for having given us all the support we needed, and to Wen Chang for having always been close to me and for having made my life wonderful.

Thank you!

Savino.


SUMMARY

1. Introduction
2. Background
2.1 Motivation
2.2 Terminology
2.3 Method section
3. Test Design Techniques
3.1 Positive Testing
3.2 Negative Testing
3.3 Boundary Value Analysis
3.4 Equivalence Partitioning
3.5 Random Input
3.6 Fault Injection
3.7 Exploratory Testing
4. Open Source System
4.1 Open Source Selection Criterions
4.2 Open source used
5. Systematic use of Test Design Techniques
5.1 Motivation about systematic routine
5.2 Systematic Test case creation
5.3 Test techniques used in experiments
5.4 Level of test used in experiment
5.5 Selection of Input Values
5.6 Test cases creation
5.7 Test Case Table
5.8 Comparison between Systems
6. Case Study
6.1 Study of the system
6.2 Practical usage of Test Design Techniques
6.3 Code Coverage Results
6.4 Case Study Results
7. Results
7.1 Test techniques
7.2 Open source system's coverage
7.3 Open Source Installing Problems
7.4 Systems Classification
7.5 Test techniques classification
7.6 Statistic results
8. Future work and Conclusion
A. Appendix "The Hat"
B. Appendix "Case Study"
References

IMAGES INDEX

FIGURE 1. BVA PARTITIONING
FIGURE 2. NEGATIVE TESTING IS NOT OBVIOUS WITHOUT SPECIFICATION
FIGURE 3. BVA IS AIMING TO CHECK THE BORDERLINE BETWEEN EQUIVALENCE PARTITIONING FOR THIS SET
FIGURE 4. THE HAT INTERFACE
FIGURE 5. OUR FIRST APPROACH TO TESTING
FIGURE 6. GUI EXAMPLE
FIGURE 7. V-MODEL
FIGURE 8. GENERIC COMPARISON TABLE
FIGURE 9. BANKSYSTEM SCREENSHOT
FIGURE 10. SELECTED SOURCE CODE IN BKSFIJC21
FIGURE 11. AFFECTED SOURCE CODE IN BKSFIJC21
FIGURE 12. RESULT FROM BKSFIJC21 AT SYSTEM LEVEL
FIGURE 13. CODE COVERAGE BKSFIJC21
FIGURE 14. COMPLETE CODE COVERAGE OF BANK SYSTEM
FIGURE 15. BEJEWELED SCREENSHOT
FIGURE 16. HOW TO READ SYSTEMS RESULTS TABLE
FIGURE 17. TEST TECHNIQUES GRAPH
FIGURE 18. THE HAT'S GUI
FIGURE 19. SHORTCUT KEYS FOR MENU
FIGURE 20. SHORTCUT KEYS FOR OPTION
FIGURE 21. SELECTED SOURCE CODE IN BKSFIJC20
FIGURE 22. AFFECTED SOURCE CODE IN BKSFIJC20
FIGURE 23. CODE COVERAGE BKSFIJC20
FIGURE 24. SELECTED SOURCE CODE IN BKSFIJC21
FIGURE 25. AFFECTED SOURCE CODE IN BKSFIJC21
FIGURE 26. RESULT FROM BKSFIJC21 AT SYSTEM LEVEL
FIGURE 27. CODE COVERAGE BKSFIJC21


TABLE INDEX

TABLE 1. ASCII TABLE
TABLE 2. SYSTEMS USED IN THIS EXPERIMENT
TABLE 3. TESTING / LEVEL
TABLE 4. ASCII TABLE
TABLE 5. ASCII RANGE VALUES
TABLE 6. NORMAL AND NEGATIVE RANGE VALUES
TABLE 7. BVA RANGE VALUES
TABLE 8. FAULT INJECTION TABLE
TABLE 9. TEST DESIGN TECHNIQUES
TABLE 10. BANK SYSTEM'S REQUIREMENTS
TABLE 11. BANK SYSTEM - NORMAL TCS
TABLE 12. BANK SYSTEM - NEGATIVE TCS
TABLE 13. BANK SYSTEM - RANDOM INPUT TCS
TABLE 14. BANK SYSTEM - BVA TCS
TABLE 15. BANK SYSTEM - EQ. PARTITIONING TCS
TABLE 16. BANK SYSTEM - ERROR GUESSING TCS
TABLE 17. SEARCH TCS - BANK SYSTEM
TABLE 18. BANK SYSTEM - TCS SUMMARY
TABLE 19. BANK SYSTEM RESULTS
TABLE 20. SYSTEMS RESULTS
TABLE 21. SYSTEM'S CLASSIFICATION
TABLE 22. TEST TECHNIQUES CLASSIFICATION
TABLE 23. FAILURE AVERAGE
TABLE 24. TCS AND TC FAILED FOR EACH TDT
TABLE 25. SYSTEM FAULTS DESCRIPTION
TABLE 26. BANK SYSTEM TCS TABLE


1. INTRODUCTION

Before writing this document, we attended a course on software verification and validation, where we learned about test techniques, the levels of testing in a development process, what a test case is, and how to be systematic in testing. These notions gave us the basic knowledge needed for this work. We then spent almost three months deepening our knowledge of test techniques and learning how to be systematic, and we participated in an experiment at Ericsson on testing an open source system. In that experiment we used basic and common test techniques such as Normal, Negative, Random Input, Boundary Value Analysis, Equivalence Partitioning, Code Coverage, Fault Injection and Error Guessing, and we use the same test techniques in this work. These test design techniques are used to demonstrate that open source systems often lack fundamental testing: in a short time we can find faults or strange system behavior. When a new system is developed, different software development processes are used, such as the Waterfall model, the Spiral model or the V-model.

Test cases can be created not only when a system is complete but also at different steps of the development process, such as during the requirements phase, and then executed once the system is complete.

Why have we chosen open source systems as our target? According to Andress (section 1, page 14 [18]), open source systems have several advantages, but they may also have disadvantages. Firstly, open source systems generally do not have a single entity supporting the product; many of them are hobby projects of developers and for this reason they often lack fundamental testing. Secondly, many users find open source systems difficult to use, because the systems have often been developed for the Linux platform while most people today use Windows; in addition, open source systems are frequently poorly documented, offering nothing more than a README file or a manual page, which makes it hard to understand how to use them. Thirdly, with an open source system the code can be inspected and tested by anyone [44]. This last point is important for our work, because it allows us to test open source systems with the fault injection technique (see section 3 for details) and thereby create test cases at code level as well (see Table 8). We use statement coverage to measure what percentage of an open source system is covered after the execution of a test case, and to create new test cases that cover different lines of code. We have noticed that few open source systems have requirements documentation, and without requirements we cannot really do testing. As Graham [57] puts it: "...Because requirements form the basis for tests, obviously we can't do any testing until we have decent requirements… Some would say that without a specification, you are not testing but merely exploring the system…". For this reason our systematic approach includes not only testing itself, but


also the process of re-engineering a system's requirements (see section 5). By partitioning an ASCII table (see Fig. 10), we can systematically create different ranges of input values for different system domains. This helps us create test cases and, in the end, test any system that takes ASCII as input.

In our work, we apply several common and overlapping test design techniques to a series of open source systems at defined levels (see section 5.3). This thesis provides a thorough description of these test design techniques. Each technique is measured in different ways (see sections 3 and 5), including statement coverage.


2. BACKGROUND

This section gives a short background and overview of the topics covered in this thesis. The motivation for the work is presented in section 2.1, the terminology of test design techniques in section 2.2, and the method we followed in section 2.3.

2.1 MOTIVATION

Our motivation is to provide a better understanding of what is commonly tested and not tested in open source systems, and at the same time to contribute to a better understanding of the applicability of these test design techniques (TDT). We also show a way to classify systems based on the time spent (in minutes) to identify the unique part of the system where a test case (TC) is applied.

2.2 TERMINOLOGY

What do we mean by testing a system? What is testing? Testing is the process of evaluating a system against its specified requirements. The IEEE Standard Glossary of Software Engineering Terminology [29] defines testing as: the process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

In our approach we also use code coverage (by code coverage we specifically mean statement coverage, according to [1]) to determine which parts of the software have been covered by the existing test case suite and which parts have not yet been executed. Coverage is measured with the tool EclEMMA [3]. With the term effective we mean the ability of a test design technique to expose failures in the system.
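To illustrate what statement coverage measures, consider the small, hand-made Java example below; the Account class and its values are our own illustration, not code from the studied systems. A single test that only exercises the normal path never executes the statement in the error branch, so a tool such as EclEMMA would report the method as only partially covered and point to the missed line as a candidate for a new test case.

// A hand-made illustration of statement coverage; the class and the numbers are
// assumptions used only for this example.
public class Account {

    private int balance;

    public Account(int balance) {
        this.balance = balance;
    }

    public int withdraw(int amount) {
        if (amount > balance) {                                        // executed by the test below
            throw new IllegalArgumentException("insufficient funds");  // never executed by the test below
        }
        balance -= amount;                                             // executed
        return balance;                                                // executed
    }

    public static void main(String[] args) {
        // One "normal" test: withdraw 50 from a balance of 100.
        Account account = new Account(100);
        System.out.println(account.withdraw(50));  // prints 50; the error branch stays uncovered
    }
}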

According to Beizer [53] there are perhaps as many as 57 different varieties of test case design techniques, and Copeland [2] has produced an excellent practical description of some of the most popular and useful ones. A Test Design Technique (also called TDT or Test Technique) is, following Broekman (section 11, pages 113-116 [16]) and Graham (section 4, pages 77-80 [17]), a procedure or method to derive and/or select test cases from reference information. A TDT has to be:

• Applicable
• Efficient
• Effective

Applicable is a rather new aspect of a test design technique, and can be defined as the ability of the technique to be automated. Another aspect of applicability is


that a test design technique may be valid only in its specific context, and not necessarily possible to apply to other software and domains [43]. Efficient is defined by Rothermel and Harrold [44] as a measure of the computational cost, which also determines the practicality of a technique. According to Eldh [33] it also includes the time required to comprehend and implement the test technique. Effective can be defined as the number of faults the technique will find [33]. According to Weyuker [43], the effectiveness of a test design technique can only be measured by comparing two techniques on the same set (i.e. the same software), and the result is not general.

Test cases can be defined in many ways; one definition, from Schmidt [46], states that a test case consists of:

1. ID: the unique identifier of the test case.
2. SUT: the system under test or, at another abstraction level, a part or a service of the system under test.
3. Pre-condition: a set of assertions ensuring that the prerequisites for exercising the test case are satisfied.
4. Post-condition: a set of assertions evaluating the correctness of the execution results.
5. IN: the names and values of the input parameters of the operation.
6. OUT: the names and values of the output of the operation.
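To make this structure concrete, the sketch below captures the six elements as a plain Java data holder. The class and field names are our own illustration (they are not taken from Schmidt [46] or from any tool used in the thesis), and the string-based types are an assumption chosen for simplicity.

import java.util.Map;

// A minimal sketch of the test case structure above; the concrete types are an
// assumption made only for illustration.
public class TestCaseRecord {

    private final String id;                          // 1. unique identifier
    private final String sut;                         // 2. system (or part/service) under test
    private final Map<String, String> preCondition;   // 3. assertions that must hold before execution
    private final Map<String, String> postCondition;  // 4. assertions that evaluate the execution results
    private final Map<String, String> in;             // 5. names and values of the input parameters
    private final Map<String, String> out;            // 6. names and values of the expected output

    public TestCaseRecord(String id, String sut,
                          Map<String, String> preCondition,
                          Map<String, String> postCondition,
                          Map<String, String> in,
                          Map<String, String> out) {
        this.id = id;
        this.sut = sut;
        this.preCondition = preCondition;
        this.postCondition = postCondition;
        this.in = in;
        this.out = out;
    }

    public String getId() { return id; }

    public String getSut() { return sut; }
}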

We have also found other descriptions of how a test case can be defined, from Copeland [2] and Myers [47]. According to Kamde, Nandavadekar and Pawar [56], a test case must also have quality attributes such as:

• Correctness
• Accurate and Appropriate
• Economical
• Repeatable
• Traceable
• Measurable

Correctness means that the test case is appropriate for the tester and the environment. Accurate and appropriate means that it tests what its description says it will test. Economical means that it has only the steps needed for its purpose. Repeatable means that it gives the same results no matter who runs it. Traceable means that it can be traced to a requirement. Measurable means that it returns the tested environment to a clean state.

2.3 METHOD SECTION

Four people were simultaneously involved in the work, but each of us independently completed our research on the test design techniques. Each person's goal was to learn more about testing, since their Master's thesis is in this area.


We started with a personal literature study that extended and deepened the content of the Verification and Validation course at Mälardalen University.

We held several meetings, with and without our supervisor, to discuss each technique and compare our thoughts, and we have documented the techniques below. A meeting with the supervisor included a presentation about a tested system or about what we had done during the previous week.

In addition, we repeated the literature study to consolidate our new understanding. Two students focused on open source system testing and the other two on closed source system testing, but everyone was interested in functional testing, and we all worked under the same supervisor. Our testing background comes from the University of L'Aquila (Italy) and Mälardalen University (Sweden), and we used it as a starting point for approaching the literature. We met with our supervisor every week to show our work and progress and to discuss the TDT and the thesis. Each meeting included a PowerPoint presentation of the week's progress from each group, a joint discussion, and time for questions and for clarifying doubts or mistakes. These meetings were very important, because through them we discovered the meaning of a "systematic approach". At the beginning we used a personal model-driven approach, but we soon realized that it is weak compared to systematic methods, because with it we did not have a complete view of all possible inputs to a system. During each meeting we generally presented a system tested during the previous week and used it as a starting point to discuss testing, mistakes and possible improvements.

We also had meetings without our supervisor, where we discussed the TDT in depth. These gave good results and were helpful for understanding the TDT and comparing our thoughts.

In these meetings we came to understand that, under particular conditions, Positive Testing and Negative Testing are not real techniques.

We clarified the main differences between BVA (Boundary Value Analysis) and EP (Equivalence Partitioning): BVA works on extreme values of a numeric domain, while EP needs partition classes as a starting point. In a system without documentation it is not always clear what is correct and what is not, and for this reason we need preconditions and assumptions in order to perform Positive and Negative Testing. Discussing levels of testing, we noted that some TDT, such as Fault Injection, work at code level but show their effects at system level. All of these observations are explained in the corresponding paragraphs below.

Those meetings were very useful because they enabled us to start looking at testing with different eyes. By meeting with our supervisor once a week and with the other students, we constantly compared our points of view and discovered new possible scenarios. After the meetings we usually went back to the literature and took notes in order to reorganize our knowledge and improve.


3. TEST DESIGN TECHNIQUES

In this section we explain which TDT we used in our approach to testing open source systems. We selected a series of test design techniques with the aim of identifying which would be beneficial to use for evaluating the quality of open source systems. At this point a question arises: are some techniques [33] more effective than others? In our work we chose a set of simple and common test techniques: Exploratory, Random Input, Normal, Negative, Boundary Value Analysis, Equivalence Partitioning, Code Coverage, Fault Injection and Error Guessing.

We have chosen exploratory testing [58] because not all systems have a requirements document; before starting to test, we play with the system by trying to execute it and feed it values. Exploratory testing is defined in several ways, for example by Graham [57] as "...exploratory testing, designed for situations with inadequate requirements..." and by James Bach as "...simultaneous learning, test design, and test execution" [45]. Tinkham and Kaner give a slightly different definition: "Any testing to the extent that the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests" [47]. According to Kaner, Bach, and Pettichord [46], exploring means "…purposeful wandering: navigating through a space with a general mission, but without a pre-scripted route. Exploration involves continuous learning and experimenting." When we use exploratory testing, we use it to explore the system functionality. Exploratory here means that we are always also using another test design technique, which could have been stated as the main technique, except that we have no validity source in the form of any type of specification to strengthen the argument for using it. This is the weakest point of our thesis: because of the lack of requirements or specifications in the chosen open source systems, all approaches could be called exploratory. Instead, we make the assumption that we have an understanding of the system, so that we know what the correct outcome of a test case is, and hence can use a specific test design technique.

We have chosen "normal" execution, meaning that the main intended usage of a function is exercised with what appear to be the "default" normal values expected. This test design technique has many names, such as positive, requirement or functional test, from Watkins (section 3, pages 19-22 [11]) and [45]. The "normal case" technique is valuable since it not only demonstrates actual program use, but also shows that a particular feature, a functionality, or the system as a whole is working properly.

Negative testing, from Watkins (section 2, page 9 [11]) and [14], is a test design technique whose aim is to insert unexpected values into the system, meaning values that would either trigger fault handling or not normally be allowed as input. It is intended to demonstrate which aspects of the system are working. It is often used to test aspects that have not been documented, or have been poorly or


incompletely documented in the specification, according to Watkins (section 3, page 16 [11]). The goal is to evaluate whether these "negative input values" are handled or not. Normal and negative test design techniques are commonly used in industry [11][14].

Random input testing, from Craig and Jaskiel (section 5, page 175 [1]) and Watkins (section 3, page 21 [11]), is a technique that uses an automated tool to insert random input values from a given or generated set of inputs. It contains all kinds of values (and formats) of input.

Boundary Value Analysis [54][55], from Burnstein (section 4, pages 72-73 [12]) and Copeland (section 4, pages 39-44 [2]), is a test design technique that, when applied, normally yields three test cases for each boundary: on each side of the boundary and on the actual boundary. In Figure 1 we have x-, x, x+ (lower bound, extreme and upper bound) and z-, z, z+ (lower bound, extreme and upper bound). According to Copeland (section 4, pages 42-43 [2]), this technique is most appropriate where the input is a continuous range of values.

Figure 1. BVA Partitioning

It is not possible to include all values of the attribute domains in test cases, but a domain can be split into equivalence partitions; in that case, following Pol, Teunissen and Van Veenendaal (section 15, pages 201-202 [19]), we use Equivalence Partitioning [54][55], also called Equivalence Class testing by Copeland (section 3, pages 28-33 [2]). It is a testing technique used to reduce the number of test cases to a manageable level while still maintaining high coverage of the system. With these functional techniques we cover the major test techniques handling input.

For structural approaches we have chosen statement coverage, following Burnstein (section 5, pages 101-108 [12]) and [2]. Code statement coverage, from Craig and Jaskiel (section 5, pages 181-182 [1]) and [8][48][49][50], is used to understand which code has not been exercised after a TC (Test Case) execution, and to improve or create new test cases to cover the unaddressed code. In our work we used a simple and free tool for code coverage in the Java environment called EclEMMA [3]. This tool was selected because it is fast to develop and test with, coverage results are immediately summarized and highlighted in the Java source code editors, and it does not require modifying the projects or performing any other setup.

In addition, we approach the code level with a test design technique called Fault Injection [51][52]. According to Hsueh, Tsai and Iyer [59]: "...In this technique, instructions are added to the target program that allows fault injection to occur before particular instructions, much like the code-modification method. Unlike code modification, code insertion performs fault injection during runtime and adds instructions rather than changing original instructions"; and according to Voas and McGraw it is "a useful tool in developing high quality, reliable code. Its ability to reveal how software systems behave under experimentally controlled anomalous circumstances makes it an


ideal crystal ball for predicting how badly good software can behave." [13]. The aim of this technique is to test the code and the database (if present) of a system by injecting faults and checking whether the fault propagates in the code and whether it is caught or not.

We use Error Guessing [41][42], which is an ad hoc approach based on intuition and experience, with the goal of identifying critical test values that will most likely reveal errors. Some people, according to Myers (section 4, pages 88-89 [41]), seem to be naturally adept at program testing; these people seem to have a knack for "smelling out" errors by intuition and experience.

Our goal is to use negative testing, random input, boundary value analysis, equivalence partitioning, fault injection and error guessing to get a better overview of the interaction between components and to see how the system responds when faults are propagated in the code, instead of the usual goal of evaluating whether the test suite is complete.

Below we present further definitions found in scientific papers and books and, at the end of each subsection, our own conclusions on the TDT. We also show an example of how to apply each TDT.

The structure of each TDT description is as follows:
• a citation of a definition selected from scientific papers and books;
• problems and misunderstandings encountered during the project;
• conclusions on the literature;
• a simple example of how to create TCs with the TDT.

3.1 POSITIVE TESTING

"The process of Positive Testing is intended to verify that a system conforms to its stated requirements. Typically, Test Cases will be designed by analysis of the Requirements Specification document for the Application Under Test (AUT)." [61, page 16]

During our study we asked ourselves whether Positive Testing is a real technique or not. The answer is "it depends on the context", a conclusion that comes from several reflections. When we test a system for which documents exist, it becomes easier to understand what the functional requirements are and what the system has to do; in this case we have a very strong basis for deciding where to apply Positive Testing and what the expected results are. In a scenario without documentation it becomes harder to understand the functional requirements, because we have to suppose that a feature is part of the system's intended behavior and use our intuition as a precondition for testing. Positive Testing is not a mathematically based technique.

In conclusion, Positive Testing (or Normal Testing) means confirming the main requirements of the system when they are available. We provide as input only data that the system's domain expects to receive as valid input. Positive Testing without specifications is not an exact science, because without documentation we guess the intentions of the system and it is possible to miss, for example, computation results.

We want to show an example of what a TC for Positive Testing can look like. Imagine a bank scenario in which a system has to handle customer accounts and the system documentation is missing. We have to suppose that during


customer registration the field for the customer name accepts only letters, the field for the address accepts only alphanumeric values, and so on. To create a TC we must assume that something in the system is correct, use it as a precondition and then build a TC on it, or use a systematic method to understand the system. This could be considered "re-engineering" the requirement with the aid of the test design technique.
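A minimal sketch of such a positive TC is shown below. The CustomerRegistry class and its register method are hypothetical stand-ins for the undocumented system under test; the assumed precondition is that the name field accepts only letters and the address field only alphanumeric characters.

// A minimal sketch of a positive (normal) test case for the bank scenario above.
// CustomerRegistry and register(...) are hypothetical names; the validation rules
// encode our assumed precondition, not a documented requirement.
public class PositiveRegistrationTest {

    public static void main(String[] args) {
        CustomerRegistry registry = new CustomerRegistry();

        // Positive input: values the system domain is expected to accept as valid.
        boolean accepted = registry.register("Alice Smith", "Storgatan 12");

        // Expected outcome of a positive test: the valid input is accepted.
        System.out.println(accepted ? "PASS: valid customer data accepted"
                                    : "FAIL: valid customer data rejected");
    }
}

// Stub standing in for the undocumented system under test.
class CustomerRegistry {
    boolean register(String name, String address) {
        return name.matches("[A-Za-z ]+") && address.matches("[A-Za-z0-9 ]+");
    }
}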

3.2 NEGATIVE TESTING

"The process of Negative Testing is intended to demonstrate 'that a system does not do what it is not supposed to do'. That is, Test Cases are designed to investigate the behavior of the AUT (Application Under Test) outside the strict scope of the Requirements Specification. It is often used to test aspects of the AUT that have not been documented or have been poorly or incompletely documented in the specification." [61, page 16]

The discussion about Negative Testing has similar properties to that about Positive Testing. Negative Testing is obvious and easier to apply when documentation or specifications exist: with documents it is easy to understand which inputs are wrong and which can be used, but for systems that lack documents it can be really difficult. This scenario is common for systems without documentation, because we have to decide what is right and what is wrong. Figure 2 shows an example of how difficult it can be to understand what a normal execution path is and what it is not. In this example there is no manual or "help" guidance to read, and no specified rules to refer to. Therefore we generally base the construction of test cases only on preconditions, specifying what we assume to be the normal intention and what is not.

Figure 2. Negative Testing is not obvious without specification

Our conclusion is therefore that Negative Testing creates test cases whose input gives the system wrong data compared to what it expects. The intention is to observe how the system reacts to these inputs. We expect the Negative Test design technique to reflect how well the system manages to handle wrong input (for example using an alert message


or some other routine for handling wrong data). The negative test design technique also works outside the accepted norm of the system; in other words, we do something with the system that was not intended.

Imagine a text field in which it should only be possible to insert numbers, and we want to use Negative Testing. We have to define which values are accepted and which are not; one way is to define the allowed and not-allowed input domains. We use an ASCII table (Table 1) to show all possible one-character inputs.

TABLE 1. ASCII TABLE

The positive value domain covers the characters with decimal codes 48 to 57 (the Dec column in Table 1), and the negative value domain covers codes 0 to 47 and 58 to 127.

This test design technique results in a series of test cases, where one value, or a combination of several values, from the negative domain is used as input for a negative TC. A combination of values from the positive and negative domains can produce further TCs.
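As a sketch of how such negative inputs can be derived systematically from Table 1, the snippet below enumerates every one-character input outside the digit range 48-57; the enumeration itself is our own illustration of the rule stated above.

import java.util.ArrayList;
import java.util.List;

// A sketch of deriving one-character negative inputs for a numbers-only field
// from the ASCII ranges above: codes 48-57 form the positive domain, the rest
// of 0-127 forms the negative domain.
public class NegativeInputDomain {

    public static List<String> negativeInputs() {
        List<String> inputs = new ArrayList<>();
        for (int code = 0; code <= 127; code++) {
            if (code < 48 || code > 57) {               // outside the digit range 48..57
                inputs.add(String.valueOf((char) code));
            }
        }
        return inputs;
    }

    public static void main(String[] args) {
        // 118 candidate values; each one (or a combination of several) can become
        // the input of one negative test case.
        System.out.println("Negative one-character inputs: " + negativeInputs().size());
    }
}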


3.3 BOUNDARY VALUE ANALYSIS

"It requires that the tester select elements close to the edges, so that both the upper and lower edges of an equivalence class are covered by test cases.
1. If an input condition for the software-under-test is specified as a range of values, develop valid test cases for the ends of the range, and invalid test cases for possibilities just above and below the ends of the range.
2. If an input condition for the software-under-test is specified as a number of values, develop valid test cases for the minimum and maximum number as well as invalid test cases that include one lesser and one greater than the maximum and minimum.
3. If the input or output of the software-under-test is an ordered set, such as a table or a linear list, develop tests that focus on the first and last element of the set." [62]

BVA is a rule-based TDT that can be automated if the range can be clearly defined as an ordinal, countable set.

During our study we were a little confused about the difference between Equivalence Partitioning and Boundary Value Analysis, because in some cases they are very similar and some TCs are the same. Now we can say that BVA lies on the borders between equivalence partitions, and we explain this below. We refer only to numeric EP.

We would like to show an example to clarify this statement. Consider the domain 5 <= X <= 20. In Equivalence Partitioning we have:

Class A: 5 <= X <= 20
Class B: values > 20 (we expect all these values to be handled in the same manner, different from class C)
Class C: values < 5 (we expect all these values to be handled in the same manner, different from class B)

In BVA, using 5 and 20 as extreme values, we have the following boundary test cases:

Possible test case on the value 5: use 4 as input
Possible test case on the value 5: use 5 as input
Possible test case on the value 5: use 6 as input
Possible test case on the value 20: use 19 as input
Possible test case on the value 20: use 20 as input
Possible test case on the value 20: use 21 as input

The following figure (Figure 3) shows exactly what we mean.

Figure 3. BVA is aiming to check the borderline between Equivalence Partitioning for this set


If we represent the equivalence partitions, marking the classes in blue and the BVA values in red, we can see that the BVA values lie on the borders between the equivalence partitions. Generally, BVA gives three possible test cases for each extreme value.

Referring to Figure 3, each boundary contributes three values: the extreme value itself, the value just below it and the value just above it.

Concluding, we can say that the number of BVA test cases in a system is equal to the number of extreme values multiplied by three. This is not a strict rule, but it is the routine used in our project. In Boundary Value Analysis we focus testing on the extreme values and on the next or previous value according to the current domain, and each value becomes a TC.
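The "three values per extreme" routine described above can be written down as a small generator; the sketch below is our own illustration for an integer domain such as 5 <= X <= 20.

import java.util.ArrayList;
import java.util.List;

// A sketch of the boundary-value routine above: for each extreme value of an
// integer domain we take the value itself, its predecessor and its successor.
public class BoundaryValues {

    public static List<Integer> boundaryValues(int lower, int upper) {
        List<Integer> values = new ArrayList<>();
        for (int extreme : new int[] { lower, upper }) {
            values.add(extreme - 1);   // just before the boundary
            values.add(extreme);       // on the boundary
            values.add(extreme + 1);   // just after the boundary
        }
        return values;
    }

    public static void main(String[] args) {
        // For the domain 5 <= X <= 20 this prints [4, 5, 6, 19, 20, 21],
        // i.e. the six test inputs listed in the example above.
        System.out.println(boundaryValues(5, 20));
    }
}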

3.4 EQUIVALENCE PARTITIONING

"Partitioning testing techniques must produce distinct partitions of the input domain and none may overlap, as any intersections could not be considered homogeneous: values within these intersections would not behave similarly to both partitions." [59]

In Equivalence Partitioning testing the domain is divided into different sub-domains, and the assumption is that all data (values, characters and so on) within a sub-domain (class or partition) are treated in the same way by the system. It is very useful for covering parts of the system left untouched by other test techniques.

In Equivalence Partitioning testing we divide the domain into sub-domains and also add external sub-domains. An example of the different partitions, using the domain shown for boundary values (Figure 3), is:

Partition A: values inside the domain (5, 6, 7, …, 18, 19, 20)
Partition B: upper out-of-range values (> 20)
Partition C: lower out-of-range values (< 5)

We create "Partition B" and "Partition C" because we expect values greater than 20 to be handled differently from values lower than 5 (for example with a different error message or a different routine).

After this analysis, creating TCs becomes easier, because a value from each class can be used as the input of a test case.
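A small sketch of this idea for the same numeric domain is shown below; the chosen representative values are arbitrary, since by assumption any value inside a partition would do.

import java.util.LinkedHashMap;
import java.util.Map;

// A sketch of equivalence partitioning for the domain 5 <= X <= 20: one
// representative per partition is enough, because all values in a partition are
// assumed to be handled in the same way by the system.
public class EquivalencePartitions {

    public static Map<String, Integer> representatives(int lower, int upper) {
        Map<String, Integer> reps = new LinkedHashMap<>();
        reps.put("Partition A (inside the domain)", (lower + upper) / 2);  // e.g. 12
        reps.put("Partition B (above the domain)", upper + 10);            // e.g. 30
        reps.put("Partition C (below the domain)", lower - 10);            // e.g. -5
        return reps;
    }

    public static void main(String[] args) {
        representatives(5, 20).forEach((partition, value) ->
                System.out.println(partition + " -> test input " + value));
    }
}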

3.5 RANDOM INPUT

"It is creating tests where the data is in the format of real data but all fields are generated randomly, often using a tool.
1. The tests are often not very realistic.
2. Many of the tests become redundant.
3. The tests cannot be recreated unless the input data is stored (especially when automated tools are used).


4. There is no gauge of actual coverage of the random tests." [63, page 175]

"Random input is one of the very few techniques for automatically generating Test Cases. Test tools provide a harness for the AUT (Application Under Test), and a generator produces random combinations of valid input values, which are input to the AUT. This technique can be highly effective in identifying obscure defects." [61, page 18]

Random input is not always applicable to every domain. During our study we came across an application that manages the UML domain, and we did not find a way to feed random UML values to the system.

In our view, Random Input means using automated tools to create input data and checking how the system responds; most of the time it can be used for "crash-proofing", or to see whether the system will "hang together" under adverse impact.
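A minimal sketch of such a generator is shown below (our own illustration, not the tool used in the thesis). It produces strings of random printable ASCII characters and keeps the seed, so that the same inputs can be regenerated later, which addresses the repeatability problem quoted above.

import java.util.Random;

// A minimal sketch of random input generation: random-length strings of random
// printable ASCII characters. Storing the seed makes the generated inputs
// reproducible.
public class RandomInputGenerator {

    public static String randomInput(Random random, int maxLength) {
        int length = 1 + random.nextInt(maxLength);
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append((char) (32 + random.nextInt(95)));  // printable ASCII 32..126
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        long seed = 42L;                  // record the seed so the run can be repeated
        Random random = new Random(seed);
        for (int i = 1; i <= 5; i++) {
            System.out.println("Random input " + i + ": " + randomInput(random, 20));
        }
    }
}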

When copying and pasting random values into systems we encountered many unexpected results. Java applications, for example (such as the system proposed in section 6), have some sort of input checking when characters are typed on the keyboard, but they are not able to handle values copied and pasted from "The Hat" (see Figure 4).

Figure 4: The Hat interface

The Hat is a free tool from "Harmony Hollow Software" for generating possible inputs for test cases. The tool offers two different ways of providing input.


The first is manual insertion of values: we type many different inputs and then press Shuffle to receive one value.

The second option is to load a .txt file that contains values; when we click Shuffle, The Hat extracts one value, which we use as input for the TC. Each line of the .txt file represents a possible value.
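A minimal sketch of the same idea (this is not The Hat itself; the file name and charset are assumptions): read candidate values from a text file, one per line, and pick one at random as the TC input. Using a fixed seed, or storing the picked value in the TC table, keeps the test repeatable, which addresses the recreation problem quoted above.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.Random;

public class RandomInputPicker {

    /** Pick one random line from a file of candidate input values. */
    static String pick(Path valuesFile, Random random) throws IOException {
        List<String> lines = Files.readAllLines(valuesFile, StandardCharsets.UTF_8);
        return lines.get(random.nextInt(lines.size()));
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical file with one candidate value per line.
        Path file = Path.of("random-values.txt");
        // Fixed seed so the picked value can be recreated when the TC is repeated.
        System.out.println(pick(file, new Random(42)));
    }
}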

3.6 FAULT INJECTION

“Fault injection, i.e., the deliberate introduction of faults into a system (the target system) is applicable every time fault and/or error notions are concerned in the development process. When fault injection is to be considered on a target system, the input domain corresponds to a set of faults F and a set of activations A that specifies the domain used to functionally exercise the system, and the output domain corresponds to a set of readouts R and a set of derived measures M. Together, the FARM sets constitute the major attributes that can be used to fully characterize fault injection.” [60]

In our view, Fault Injection is the most important code-level testing technique among the studied TDTs. We essentially modify the system at source-code level and look for changes at system level.

Fault Injection testing gives good results in a familiar environment or when we can reconstruct the software architecture. Constructing good Fault Injection test cases has some prerequisites, such as a good knowledge of the development language; otherwise we cannot understand what we are changing and why. Likewise, good knowledge of the system makes Fault Injection easier to apply.

If we can understand how components are connected and how they communicate with each other, we can observe error propagation through the system and see how the affected components react to the code change.

Combining Fault Injection with coverage testing can give good results, because we want to use Fault Injection on areas of the system left untouched by the other techniques and thereby increase average test coverage.

Generally we select Test Cases according to the source code already covered, trying to target unexplored areas of the source code.

3.7 EXPLORATORY TESTING

“Exploratory testing is a test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.” [64]

We decided to discard Exploratory Testing because its test cases cannot be repeated and do not contain an expected outcome/verdict. We use Exploratory Testing only to explore and become familiar with the system.


4. OPEN SOURCE SYSTEM

Open source doesn't just mean access to the source code. The distribution terms of open-source software must comply with the following criteria, according to Peres and Bruce, pages 177 to 180 [35]:

“TO BE OPEN SOURCE, ALL OF THE TERMS BELOW MUST BE APPLIED TOGETHER, AND IN ALL CASES. FOR EXAMPLE, THEY MUST BE APPLIED TO DERIVED VERSIONS OF A PROGRAM AS WELL AS THE ORIGINAL PROGRAM. IT'S NOT SUFFICIENT TO APPLY SOME AND NOT OTHERS, AND IT'S NOT SUFFICIENT FOR THE TERMS TO ONLY APPLY SOME OF THE TIME.”

• Free Redistribution
• Source code
• Derived Works
• Integrity of The Author's Source Code
• No Discrimination Against Persons or Groups
• No Discrimination Against Fields of Endeavor
• Distribution of License
• License Must Not Be Specific to a Product
• License Must Not Restrict Other Software
• License Must Be Technology-Neutral

• Free Redistribution means the license may not require a royalty or other fee for such sale. It is possible to make any number of copies of the software and sell or give them away without paying anyone.
• Source code means the program must include source code. The source code must be the preferred form in which a programmer would modify the program; deliberately obfuscated source code is not allowed, nor are intermediate forms such as the output of a preprocessor or translator.
• Derived Works means the license must allow modifications and derived works.
• Integrity of The Author's Source Code means the license may require derived works to carry a different name or version number from the original software.
• No Discrimination Against Persons or Groups means the license must not discriminate against any person or group of persons.
• No Discrimination Against Fields of Endeavor means the license must not restrict anyone from making use of the program in a specific field of endeavor.
• Distribution of License means there are no restrictions on its use; it can be used in a business or for genetic research.
• License Must Not Be Specific to a Product means the rights attached to the program must not depend on the program being part of a particular software distribution; it must remain free if separated from that distribution.
• License Must Not Restrict Other Software means the license must not place restrictions on other software that is distributed along with the licensed software.
• License Must Be Technology-Neutral means no provision of the license may be predicated on any individual technology or style of interface.

4.1 OPEN SOURCE SELECTION CRITERIONS

For this experiment we have chosen to look at open-source software systems. We only use open source systems of small size (up to 300.000 lines of code, not counting jar and dll files), since in the limited time available we preferred to look at many systems instead of focusing on only a few. We also chose open source systems of different types to have a wider range of systems. A list of the different types of open source systems is shown below:

1. Bank / Insurance / Economic
2. Management / Calculate
3. Modeling / Development
4. Graphic / Paint
5. Game / Wits
6. Communication / Government
7. Military
8. Medicine / Diagnostics
9. Tools
10. Embedded
11. Robot (industrial)
12. Electronic
13. Web / Internet
14. Mobile application

For our experiment, we are only using open source systems from the first five types.


4.2 OPEN SOURCE USED

The table below lists the characteristics of the systems used in our experiment. It is divided into seven columns: the first gives the name of the system and the second its type, according to the types listed in the previous section.

| System | Type | Number of versions | Size of software | First release | Number of releases | Number of downloads |
| IRC Client [32] | Communication | 1 | 85,4 Kb | 12/17/2002 8:20:42 AM | 1 | 31066 |
| Age Calculator [6] | Management / Calculate | 1 | 16,2 Kb | 04/17/2002 6:32:32 AM | 1 | 7208 |
| DraW [26] | Graphic / Paint | 2 | 341,6 Kb | 20 March 2006 | 1 | 1404 |
| Image Processing [5] | Graphic / Paint | 1 | 45,8 Kb | 08/02/2007 | 1 | 5769 |
| CleanSheet [4] | Management / Calculate | 4 | 1,1 Mb | 12 May 2005 | 4 | 6631 |
| UMLet [20] | Modeling / Development | 9 | 5,99 Mb | not available | 19 | not available |
| Bejeweled [31] | Game / Wits | 1 | 110 Kb | 11/09/2006 16.19 | 1 | 6499 |
| Bomberman [27] | Game / Wits | 2 | 2 Mb | August 2001 | reaches 2.4, no further information | not available |
| Euro Budget [25] | Bank / Insurance / Economic | 2 | 1,1 Mb | 16 August 2002 | 2 | 7575 |
| Student Helper [39] | Management / Calculate | 1 | 39 Kb | 9/17/2003 | 1 | 30934 |
| JavaJUSP [38] | Management / Calculate | 1 | 2 Mb | not available | 2 | not available |
| Bank System [40] | Bank / Insurance / Economic | 1 | 169 Kb | 12/10/2003 | 1 | 33677 |
| Image J [28] | Graphic / Paint | 1 | 631 Kb | not available | 2 | not available |
| Latex Draw [30] | Graphic / Paint | 2 | 4,2 Mb | 28 January 2006 | 16 | not available |
| Jmoney [24] | Bank / Insurance / Economic | 4 | 1 Mb | 10 March 2001 | 16 | 3723 |

TABLE 2. SYSTEMS USED IN THIS EXPERIMENT


5. SYSTEMATIC USE OF TEST DESIGN TECHNIQUES

In this section we explain step by step how systematic testing is performed on our different systems. We explain both our test design techniques and our test cases for the systems under test. We also describe the method used at the beginning of this experiment and how we came to understand that we needed a more systematic routine. We found that our initial approach was useful for code-level testing and helped us select appropriate test cases, and we explain it in this section.

5.1 MOTIVATION ABOUT SYSTEMATIC ROUTINE

Planning was an important task of this project: it is the result of all our research on the data input domains and was refined many times during the development of the project.

We show how the approach used at the beginning of the project supports the case for a systematic routine.

At the beginning we tried to develop our routine from the perspective of system size and language familiarity. Documentation is a problem in open source systems because it may be poor or affected by erosion and drift, and for these reasons we need a two-way approach.

The approach is divided (Figure 5) into 2 different paths that depend on what we have at hand.

The documentation approach (blue path in Figure 5) is the simplest one: we have documentation and can use it as a guide to understand the system and to plan testing. We have to be careful about drift and erosion, and validate the documentation against the requirements, the software architecture and the running system to evaluate what we have at hand (Figure 5, point 1.a). For these systems there are no size problems, and we can test them at each level and with any data input technique when applicable, using the documentation as a guide for testing the open source system (Figure 5, point 1.b and Testing phase).

The red path in Figure 5 shows the approach used when no documentation is available.


Figure 5. Our first approach to Testing

To better understand each level of testing we need to draw the software architecture of the system and use it as a guide for testing. Using combinations of testing techniques, their results and software engineering knowledge, it is possible to improve the software architecture view and continue with testing.

Here is the approach step by step:
a) Determine the system dimension.
b) Search for available documentation (Figure 5, point 2.a).
c) Exploratory testing and first draft of the software architecture (Figure 5, point 2.b).
d) System-level testing and, when possible, improvement of the software architecture (Figure 5, points 2.b, 2.c).
e) Integration-level testing and, when possible, improvement of the software architecture (Figure 5, back from point 2.c to 2.b).
f) Code-level testing and, when possible, improvement of the software architecture (Figure 5, points 2.b and 2.c).
g) One more iteration from point a) to f) to check the robustness of the data and conclude testing (Figure 5, Testing point).
We start by looking at the source size to understand the system's dimension and learn more about the code. If we have a small or normal size system this approach


gives good results; otherwise we will spend more time on architectural recovery than on testing. Small, normal and big size systems are classified in the following manner:

For open source community projects, where a system has at most 3 developers, we say:
• Small size system: from 3 to 5000 lines of code
• Normal size system: from 5001 to 15.000 lines of code
• Big size system: over 15.000 lines of code
In terms of big companies, instead, we say:
• Small size system: from 3 to 280.000 lines of code
• Normal size system: from 280.001 to 2.500.000 lines of code
• Big size system: over 2.500.000 lines of code
If we do not have a path coverage testing tool for the current environment, we go to step b) of the stepwise description. We read about the main requirements on the application's web page and look for sections of the application such as “about us”, “about” or “references”. Forums are also a good source of information, since open source systems are uploaded to free communities where the software is discussed.

Step 2: We run the application and become familiar with it. The time needed depends strictly on the system and on the tester's skills. In this phase exploratory testing is a good ally. We start to draw a draft of the software architecture using the elements at hand.

Step 3: We commence with system testing. We improve the software architecture by trying to split big components (discovered earlier) into smaller ones, or by creating a component composed of subcomponents (depending on the current domain).

Step 4: We start with integration-level testing, assuming we have a clear architectural view of the system, and we continue to refine the software architecture with the results obtained.

Finally, we use the architecture as a guide to start code-level testing and (optionally, since it is not necessary for testing) complete it.

We then restart the approach, looking for inconsistencies and ambiguities. Here we use basic modeling techniques to clarify ambiguities.

This approach works on small systems but has some problems on medium systems and more on big ones: it becomes hard to draw the architecture through testing, because large systems are much more difficult to understand. We realized that we needed a different way to test the systems. Unfortunately it is not possible to create a universal routine to test every kind of system.

A tester has to open the system, study it, associate fields or functionalities with testing techniques, and be able to combine these techniques according to their current domains. A tester must be systematic and the results must be valid. In the next paragraph we present our view. The need for a systematic approach, the creation of test cases without guiding principles, and the resulting lack of valid results are the reasons why we abandoned this first approach.

This approach was used at the beginning, and we came to understand its weakness. The explanation is simple: it is not possible to define precisely how many test cases we could create and which parameters we need as input.


5.2 SYSTEMATIC TEST CASE CREATION

As stated earlier, exploratory testing is used to understand how the system works and to identify the main requirements; otherwise we read the system requirements documents (if they exist).

We can summarize our approach in the following tasks:
1. Exploratory testing.
2. Domain study.
3. Domain fusion.
4. TC creation.

Task 1: We check the whole system for all data input fields and collect those inputs to work on them in tasks 2 and 3.
Task 2: We create groups for the different input domains and system functionalities. We collect data inputs from different system areas, so that we get a good view of all data inputs grouped by location. For example, as shown in figure 6, we would collect data in the following manner:

Board: Name(character, size length 100), Width (numeric, size length 6), Height (numeric, size length 6).

Default Value: Level (numeric, size length 6), Point to next level (numeric, size length 6), Time decrease (numeric, size length 6), interval (numeric, size length 6), bonus (numeric, size length 6).

Task 3: All homogeneous values (same type and same size length) within the same functionality are collected into a single domain, as shown in figure 6 (dots, dashes and line). For the example system (figure 6), we can put text fields with the same kind of values into the same group. This means that when we test one input field of a group, we consider all the fields of that group tested. In this example we create three different groups:
• Dots (field Name)
• Dashes (fields Requirement factor and Decrease factor)
• Line (all the remaining fields)
Inside the dots circle we have text fields, inside the line circle integer number fields, and inside the dashes circle float number fields. We can also create different groups for other domains, such as symbol fields, date fields or images; a sketch of this grouping idea is shown below.
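A minimal sketch of the grouping step (the Field record and the field names are assumptions taken from the figure 6 example, not the actual implementation): fields with the same domain type and length fall into the same group, so one representative per group is enough for a TC.

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class DomainGrouping {

    enum DomainType { TEXT, INTEGER, FLOAT }

    /** A GUI input field with its domain type and maximum length. */
    record Field(String name, DomainType type, int length) {}

    public static void main(String[] args) {
        List<Field> fields = List.of(
                new Field("Name", DomainType.TEXT, 100),
                new Field("Width", DomainType.INTEGER, 6),
                new Field("Height", DomainType.INTEGER, 6),
                new Field("Requirement factor", DomainType.FLOAT, 6),
                new Field("Decrease factor", DomainType.FLOAT, 6));

        // Homogeneous fields (same type and same length) form one domain/group.
        Map<String, List<String>> groups = fields.stream()
                .collect(Collectors.groupingBy(
                        f -> f.type() + "/" + f.length(),
                        Collectors.mapping(Field::name, Collectors.toList())));

        groups.forEach((domain, names) -> System.out.println(domain + " -> " + names));
    }
}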


Figure 6. GUI example

Task 4: See section 5.6.

5.3 TEST TECHNIQUES USED IN EXPERIMENTS

For each system we create and execute at least 20 test cases, but the number can differ between systems. This depends on our ability to perform error guessing: sometimes we can define two error guessing test cases and sometimes none. We select test cases using the following test design techniques: four Normal and four Negative test cases, three Random, three Boundary Value, three Equivalence Partitioning, two Fault Injection, and one code statement coverage measurement used to improve the Fault Injection test cases; in addition, we base our Error Guessing on our initial exploratory play with the system.


5.4 LEVEL OF TEST USED IN EXPERIMENT

In this section we give a short introduction to the different levels of the software development process.

Figure 7. V-model

Typically, testing can be performed at different levels. In our work we create test cases at different levels and use a simplified test process model (the V-model, see Fig. 7) to identify them:

• Code or Unit
• Integration
• System
• Acceptance

• Code or Unit level [2]: a unit is the “smallest” piece of the software that a developer creates. It is typically the work of one programmer.
• Integration level [9]: a test that explores the interaction and consistency of successfully tested components; that is, components A and B have both passed their component tests and are aggregated to create a new component C = (A, B). Integration testing is done to explore the self-consistency of the aggregate, and the aggregate behavior of interest is usually observed at the interface between the components.


• System level: typically system testing [2] includes many types of testing: functionality, usability, performance and so on. This kind of testing is useful to confirm that all code modules work as specified and that the system as a whole performs adequately on the platform on which it will be deployed.
• Acceptance level [2]: testing which, when completed successfully, will result in the customer accepting the software and giving us their money.

The levels at which we applied the different test design techniques are system, integration and code. The table below shows the combinations of test techniques and levels selected for this experiment.

| Test Technique | System | Integration | Code |
| Normal | X | - | - |
| Negative | X | - | - |
| Random | X | - | - |
| Boundary values | X | - | - |
| Eq. Partitioning | X | - | - |
| Fault injection | - | - | X |
| Error Guessing | X | - | - |
| Statement Coverage | X | X | X |

TABLE 3. TESTING / LEVEL

An “X” in the Normal/System position means we used the Normal technique at System level; where we did not perform testing, the cell is marked with the “-” symbol.


5.5 SELECTION OF INPUT VALUES

After the different groups are created, we define a range of values from which specific values can be selected for each domain and each test technique. To do this we use an ASCII table (see Table 4).

TABLE 4. ASCII TABLE

We create a range of values per group using the “Dec” column of the ASCII table. For example, if we want to test an integer field, we choose values from the Dec column using combinations of the values from 48 to 57. In this way we are sure to use only integer characters when an integer field is tested. All the possible combinations have an ID code and are shown in the table below; a small generation sketch follows the table.


| ID code | Type | Values Range |
| a.1 | Characters – Uppercase | [65..90] |
| a.2 | Characters – Lowercase | [97..122] |
| a.3 | Characters – Both | [65..90] and [97..122] |
| b | Integer number | [48..57] |
| c | Float number | [48..57].[48..57] or [48..57],[48..57] |
| d | Symbol | [32..46] and [58..64] and [91..96] and [123..126] |
| e | Date | [48..57]/[48..57]/[48..57] |
| f.1 | File | random files are written using the ASCII table's values and loaded into the system |
| f.2 | Image | to load a random image we use www.google.com, type “Image” into the search field and select the first result found |

TABLE 5. ASCII RANGE VALUES
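A minimal sketch of how a test value can be generated from one of the Dec ranges above (the class and method names are assumptions; the ranges 48..57 and 65..90 correspond to ID codes (b) and (a.1)):

import java.util.Random;

public class AsciiValueGenerator {

    /** Build a string of the given length using ASCII codes
     *  drawn from an inclusive Dec range, e.g. 48..57 for digits. */
    static String fromDecRange(int from, int to, int length, Random random) {
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append((char) (from + random.nextInt(to - from + 1)));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Random random = new Random(1); // fixed seed so the TC input can be recreated
        System.out.println("integer field (b): " + fromDecRange(48, 57, 6, random));
        System.out.println("uppercase field (a.1): " + fromDecRange(65, 90, 6, random));
    }
}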

5.6 TEST CASES CREATION

Normal test cases are created to test the “normal” execution of the system: the main intended usage of a function is executed with what appear to be the “default” expected values, testing against the system's documentation. This is not always possible, because most open source systems lack documentation. According to Graham [57], “...Because requirements form the basis for tests, obviously we can't do any testing until we have decent requirements…”, so when we have no access to documentation we first use exploratory testing to “play” with the system and try to identify the main requirements. We aim at describing the main features and, from them, defining the input domain. This means that when we test a character field of a system, we pick values from a specific range; according to table 6, we use range (a) for character values.
Negative test cases are based on the field to be tested: we deliberately put wrong data into the system. Inserting wrong values means inserting values outside the appropriate range, so if we are testing a character field we pick values from ranges (b), (c), (d), (e) and, if possible, also (f), according to table 6. All the possible ranges of values are shown in the table above.


| Domain Type | Normal range | Negative range |
| Characters | (a) | (b), (c), (d), (e) and if possible also (f) |
| Characters Lowercase | (a) | (a.1) and (a.3) |
| Characters Uppercase | (a) | (a.2) and (a.3) |
| Text | (a) | (b), (c), (d), (e) and if possible also (f) |
| Integer numbers | (b) | (a), (c), (d), (e) and if possible also (f) |
| Float numbers | (c) | (a), (b), (d), (e) and if possible also (f) |
| Symbols | (d) | (a), (b), (c), (e) and if possible also (f) |
| Date | (e) | (a), (b), (c), (d) and if possible also (f) |
| Files/images | (f.1)/(f.2) | (a), (b), (c), (d), (e) and empty file or image |

TABLE 6. NORMAL AND NEGATIVE RANGE VALUES

Random input test cases are created by defining the input domain (when possible), inserting random input values generated by an automated tool called “The Hat” (see the appendix “The Hat”), and verifying the response of the system.

Boundary value analysis test cases are based on the field to be tested. We try to concentrate the testing effort on the boundary values of the system, also called the limits of the valid values (see table 7), but sometimes they are not easy to recognize or do not exist.
Equivalence Partitioning test cases are also based on the field to be tested. We divide a domain into different equivalence partitions (when possible); this is useful because it reduces the number of test cases, and we create one test case for each partition. For example, if we are testing an integer field whose allowed range is from 10 to 100, we create test cases using values from range (b) (see table 7).


| Domain Type | Boundary values range | Equivalence Partitioning range |
| Characters or text | not applicable | (a) and (b), (c), (d), (e) and if possible also (f) |
| Integer numbers | depends on the domain of the system | (a) and (b), (c), (d), (e) and if possible also (f) |
| Float numbers | depends on the domain of the system | values in (b) according to the field under test |
| Symbols | not applicable | values in (c) according to the field under test |
| Date | boundary values for the day depend on the values assumed by month and year; for more detail see Karan and Wenying, page 72 [15] | not always applicable |
| Files or images | depends on the domain of the system | not always applicable |

TABLE 7. BVA RANGE VALUES

Test cases for fault injection are created by inserting faults into the core of the system. As Hsueh, Tsai and Iyer put it [59], "...the program instruction must be modified before the program image is loaded and executed." We change the core of the program (initial value) by introducing faults (mutated value) and checking whether the fault propagates through the code or is caught; sometimes the system crashes or shows strange behavior. Examples of faults are shown in Table 8, followed by a small code sketch.

| Code | Type | Initial Value | Mutated Value |
| FIJCO | Comparison operators | ==, !=, <, >, <=, >= | !=, ==, >, <, >=, <= |
| FIJFP | Function parameters | modify the parameter(s) of a function when it is called | |
| FIJDL | Skip condition (1) | If (..){...} | /* If (..) {…} */ |
| FIJCV | Change value | e.g. While(i<10){…} | e.g. While(i<20){…} |

TABLE 8. FAULT INJECTION TABLE

(1) The "If" condition is commented out; in this way some controls are skipped and it is possible to analyze unusual system behavior. It is also possible to comment out a whole function.
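A minimal sketch of a FIJCO-style mutation (the withdraw method and its guard are assumptions for illustration, not code from the systems under test): a single comparison operator is flipped and the change is then observed at system level.

public class FaultInjectionExample {

    static double balance = 1000.0;

    /** Original guard: reject withdrawals larger than the balance. */
    static boolean withdrawOriginal(double amount) {
        if (amount <= balance) {        // initial value: <=
            balance -= amount;
            return true;
        }
        return false;
    }

    /** FIJCO mutation: the comparison operator is flipped to >=. */
    static boolean withdrawMutated(double amount) {
        if (amount >= balance) {        // mutated value: >=
            balance -= amount;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // The same TC input now produces a different observable result,
        // which is what the fault injection TC checks for.
        System.out.println(withdrawOriginal(200));  // true, balance becomes 800
        System.out.println(withdrawMutated(200));   // false: the fault is visible at system level
    }
}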


Error guessing: it depends on the tester and on how well the tester knows the system.

5.7 TEST CASE TABLE

In this section we show how to write a test case using a table. In this table we add all the information useful to understand a test case; according to Kamde, Nandavadekar and Pawar [56], a test case table has to have a name and number, a stated purpose that includes which requirement is being tested, a description of the method of testing, actions and expected results, must not exceed 15 steps, and must be saved in specified formats and file types. We use a detailed test case table with the following fields:

• Test case ID
• Test Technique and Level
• Input
• Comment
• Expected value
• Actual value
• Time
• Verdict

Test case ID is an identifier for the whole test case, composed as XXXYYYZ00, where XXX is a short name of the system, YYY is the abbreviation of the test design technique (see Table 9), Z is the test level (S = System, I = Integration, C = Code) and 00 is an identification number; a small sketch of this composition follows Table 9.

| Abbreviation | Name of test |
| NRL | Normal |
| NEG | Negative |
| RND | Random |
| BVA | Boundary Value Analysis |
| FIJ | Fault Injection |
| CCV | Code statement coverage |
| ERG | Error Guessing |

TABLE 9. TEST DESIGN TECHNIQUES
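A minimal sketch of how such an ID can be assembled (the enum and method names are assumptions; the example reproduces the BKSNRLS01 ID used in section 6):

public class TestCaseId {

    enum Technique { NRL, NEG, RND, BVA, FIJ, CCV, ERG }
    enum Level { S, I, C } // System, Integration, Code

    /** Compose an ID of the form XXXYYYZ00, e.g. BKSNRLS01. */
    static String compose(String systemShortName, Technique technique, Level level, int number) {
        return String.format("%s%s%s%02d", systemShortName, technique, level, number);
    }

    public static void main(String[] args) {
        System.out.println(compose("BKS", Technique.NRL, Level.S, 1)); // BKSNRLS01
    }
}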

Test Technique and Level records the name of the test design technique (see section 3) and the test level (see section 5.4).

Input is the description of the input. Describing an input is not as simple as it looks. First, we need to know the input domain, what an input is, and whether we are putting it into the right field or not. Second, we need to specify the input very clearly at integration level, because we must explain which part of the system (or group of


components) we are testing and in which manner. Third, a clear description is really useful for collecting data at the end of testing and comparing test case results.

Comment: we write comments or notes in this field; generally we use it to describe components at Integration level. Expected value: what we expect as a result. Actual value: the result observed after test case execution. Time: the time, expressed in minutes, spent on that test case; this value includes the time to identify the parts of the system and understand where to apply the test case. Verdict: the result of the comparison between Expected value and Actual value; if they match, the test case is "Passed", otherwise "Failed".
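A minimal sketch of the verdict rule (class and method names are assumptions; in practice the comparison is made by the tester, not by literal string matching):

public class Verdict {

    /** A TC passes when the actual value matches the expected value. */
    static String verdict(String expected, String actual) {
        return expected.equals(actual) ? "Passed" : "Failed";
    }

    public static void main(String[] args) {
        System.out.println(verdict("Value not allowed", "Value not allowed")); // Passed
        System.out.println(verdict("Value not allowed", "Account created."));  // Failed
    }
}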

5.8 COMPARISON BETWEEN SYSTEMS

In our work we tested fifteen systems, and for each of them we collected data useful for comparing test techniques (see section 7.1); at the same time we filled in a table answering two questions:

1. Time to identify unique part of the system where to apply TC.
2. Time to create a unique TC.

Using these two questions we created a table (see table 20) based on the time spent to identify the unique part of the system where to apply a test case; the sum of the times spent to identify unique parts of the system for each testing technique is used to classify the systems (see section 7.4). An example is shown below:

| Question / TDT | Time to identify unique part of the system where to apply TC | Time to create a unique TC |
| Normal | | |
| Negative | | |
| … | | |
| Fault injection | | |
| Error Guessing | | |

Figure 8. Generic Comparison Table

On the left side of the table are the test techniques used in our work, and across the top are the two questions used to compare the systems.

We have also created a classification of the testing techniques (see tab. 21) using the sum of the times spent, across all the open source systems, to recognize where to apply TCs with the same testing technique (see the last row of tab. 20).


6. CASE STUDY

In this section we show a complete case study of a system called BankSystem.

6.1 STUDY OF THE SYSTEM

A generic description of the program is: “Bank System is very useful program, it is used for banking purpose. It is possible create/delete an account, deposit/withdraw money and print customer balance report. It is also possible modify the look of the program changing the text color or theme.” The programming language of the system is Java and the number of test cases performed is 22. A screenshot with the main requirements of the system is shown below (Fig 9). To learn more about this system see appendix B.

Figure 9. BankSystem screenshot

We spent 33 minutes on exploratory testing and source code exploration to identify functionality. Below we list the discovered requirements (see appendix B for an explanation of the data domains for all fields/requirements).


| Requirement number | Requirement name |
| 1 | Bank account creation |
| 2 | Deposit money |
| 3 | WithDraw money |
| 4 | Search Customer |
| 5 | Delete Account Holder |
| 6 | Print |
| 7 | Utilities |

TABLE 10. BANK SYSTEM'S REQUIREMENTS

6.2 PRACTICAL USAGE OF TEST DESIGN TECHNIQUES

Bank System (BKS is the short name used in our Test Case IDs). Tables with some of the TCs performed on the system, one for each testing technique, are shown in the following pages (for the complete TC tables, see appendix B).

Normal technique

Test Case ID: BKSNRLS01
Test Technique and Level: Normal - System
Input:
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• File -> Open new account
• Fill in the following fields: Account No: 123; Person Name: Test; Deposit Date: April – 28 – 2009; Dep. Amount: 1000
• Click the button Save
Comment: -
Expected Value: A new account is created.
Actual Value: Account created. Image rotated of 90 degree.
Time: 8 min
Verdict: Passed

TABLE 11. BANK SYSTEM - NORMAL TCS

After exploratory testing, normal testing is the first technique used to test the main functionality of the system. The verdicts of TCs using this technique are almost always "Passed".


Negative technique

Test Case ID: BKSNEGI06
Test Technique and Level: Negative - System
Input:
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• Edit -> Deposit money
• Fill in the following fields: Account No: 123 and Dep. Amount: -100
Comment: Insert a negative value into the Dep. Amount field. Domain: d
Expected Value: Impossible to write negative values
Actual Value: Value not allowed
Time: 8 min
Verdict: Passed

TABLE 12. BANK SYSTEM - NEGATIVE TCS

After normal testing, negative testing is performed. We use the same TCs as in the normal technique, but we insert wrong values into some fields (the wrong value depends on the field's domain). In nearly every case the result of these TCs is "Failed", because little input checking is done in open source systems.


Random input technique

Test Case ID: BKSRNDI10
Test Technique and Level: Random - System
Input:
• Run the program C:\Programmi\The Hat\th.exe
• File -> Import names from file and load the file "C:\Documents and Settings\user\Desktop\The Hat – Random file.txt"
• Click on Pick individual name and click the button Pick. Result is: 1GFAG43vq43rEé*ç°_°__:;;Ar32123
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• File -> Open new account
• Fill in the following fields: Account No: 1GFAG43vq43rEé*ç°_°__:;;Ar32123; Person Name: 1GFAG43vq43rEé*ç°_°__:;;Ar32123; Deposit Date: April – 28 – 2009; Dep. Amount: 1GFAG43vq43rEé*ç°_°__:;;Ar32123
• Click the button Save
Comment: Insert the random input value "1GFAG43vq43rEé*ç°_°__:;;Ar32123" into the fields used to create a new account. Domain: a, b, d
Expected Value: Impossible to create a new account, or an alert is shown.
Actual Value: Account created.
Time: 12 min
Verdict: Failed

TABLE 13. BANK SYSTEM - RANDOM INPUT TCS


Random input is another technique where in nearly every case the result of the TCs is "Failed". It is very useful for discovering faults by inserting disparate and strange input, but it is not always usable (see section 7.1).

Boundary values technique

Test Case ID: BKSBVAI12
Test Technique and Level: Boundary Value Analysis - System
Input:
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• Edit -> Deposit Money
• Type 1879 (Standard Account) into the field Account No. and copy-and-paste "-1" into the field Dep. Amount.
• Click the button Save.
Comment: Boundary values are [-1, 0, 1]
Expected Value: Impossible to subtract or insert a negative value into the Dep. Amount field.
Actual Value: Value allowed and withdrawn from the Dep. Amount of Account No. 1879
Time: 9 min
Verdict: Failed

TABLE 14. BANK SYSTEM - BVA TCS

After the normal, negative and random input test techniques, when possible we test the system's boundary values (useful, but not always easy to apply; see section 7.1). We have to make three TCs for each boundary (see section 3), and almost every time the verdict for two of these TCs is "Failed". This means that open source systems are not tested for values outside the boundaries.


Equivalence Partitioning technique

Test Case ID: BKSEQPI15
Test Technique and Level: Eq. Partitioning - System
Input:
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• Edit -> Deposit Money
• Type Account No.: 1879 and Dep. Amount: -10000
Comment: Inserting a negative number into the Dep. Amount field. Domain: out of b
Expected Value: Alert message when the value is typed, or impossible to write a negative number.
Actual Value: Impossible to write negative number.
Time: 9 min
Verdict: Passed

TABLE 15. BANK SYSTEM - EQ. PARTITIONING TCS

Equivalence Partitioning is similar to boundary value analysis in that we can create TCs only if we have a way to partition a domain. The verdict of this technique for values outside the normal/accepted partition is almost always "Failed".


Error guessing technique

Test Case ID: BKSERGS18
Test Technique and Level: Error Guessing - System
Input:
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• File -> Open new account
• Fill in the following fields: Account No: 2; Person Name: Test; Deposit Date: January – 1 – 2000; Dep. Amount: 1000
• Click the button Save
Comment: Create a new account with a date earlier than the actual day. Actual day: April 28 2009
Expected Value: Impossible to create a new account with a date older than the actual day.
Actual Value: Account created.
Time: 8 min
Verdict: Failed

TABLE 16. BANK SYSTEM - ERROR GUESSING TCS

Error guessing is a peculiar testing technique because the creation of TCs depends on the tester's imagination and experience with the software.

Fault Injection Testing
The affected file is FindAccount.java, and in particular the Findrec() function. We replace the source code from line 161 to line 167.
Test Case ID: BKSFIJC21
Expected result: we want to change the find functionality, restricting the range of the search-holder feature.


Figure 10. Selected source code in BKSFIJC21

Figure 11. Affected source code in BKSFIJC21

Result: the data parameters change. It is possible to select only the Account No shown in the first row of "View All Account Holders" (see Figure 12).

Figure 12. Result from BKSFIJC21 at system level

Time: 37 minutes
Verdict: Passed
Comment: the search function works correctly. The code coverage of this TC is shown below (Figure 13).


Figure 13. Code coverage of BKSFIJC21 (Table Code Coverage 2)

An overview of the test case results of this case study is shown below (for more details see appendix B).

• Normal technique – TCs done: 4 (all passed).
• Negative technique – TCs done: 4 (all passed).
• Random input technique – TCs done: 3 (1 passed).
• Boundary values technique – TCs done: 3 (1 passed).
• Equivalence Partitioning technique – TCs done: 3 (2 passed).
• Error guessing technique – TCs done: 2 (0 passed).
• Fault Injection Testing – TCs done: 2 (all passed).


6.3 CODE COVERAGE RESULTS

In this section we show that it is impossible to cover all the code using only the system's GUI (it may be possible for systems with a very small code size). We executed again all the Normal and Negative TCs (see the table in appendix B), plus four more about searching an account by name, searching by number, scrolling the list of accounts and viewing the accounts table (see the following table).

| Test Case ID | Test Technique and Level | Input | Comment | Expected Value | Actual Value | Time | Verdict |
| BKSNRLS22 | Normal - Integration | Run the system in the folder C:\Documents and Settings\Savino\Desktop\Complete_B16824812102003; Edit -> Search by No., insert 1879 and click Search. | - | Show info account 1879 | Account found and info showed. | 6 min | Passed |
| BKSNRLS23 | Normal - Integration | Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003; Edit -> Search by Name, insert Salman and click Search. | - | Show info account Salman | Account found and info showed. | 6 min | Passed |
| BKSNRLS24 | Normal - Integration | Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003; View -> View one-by-one | - | Show info accounts | Info showed. | 5 min | Passed |
| BKSNRLS25 | Normal - Integration | Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003; View -> View All Customers | - | Show info accounts | Info showed. | 5 min | Passed |

TABLE 17. SEARCH TCS - BANK SYSTEM


The coverage percentage is 75.5% (see Fig. 14). After testing all the open source systems, we can say that using Normal and Negative testing we reach a coverage of 60% to 80% of the code of these systems.

Figure 14. Complete Code coverage of Bank System

After seeing the coverage percentage, we can identify which parts of the code were not covered and create new TCs (using Fault Injection, Random input or Equivalence Partitioning techniques) to cover those parts. In this way we can increase the coverage percentage by 10-15%.

6.4 CASE STUDY RESULTS

We can conclude that this system handles errors to some extent: there are checking functions on the input fields and it is impossible to type values outside the domain. However, we discovered some errors using the Random input technique with the copy-and-paste function, since the system does not manage wrong values pasted in this way. The results of these TCs are shown below:

| Name of test | Number of test cases | TC Failed |
| NRL Normal | 4 + 4 | 0 |
| NEG Negative | 4 | 0 |
| RND Random | 3 | 2 |
| BVA Boundary Value Analysis | 3 | 2 |
| EQP Eq. Partitioning | 3 | 1 |
| FIJ Fault Injection | 2 | 0 |
| CCV Code Coverage | 1 | - |
| ERG Error Guessing | 2 | 2 |

TABLE 18. BANK SYSTEM – TCS SUMMARY

We found faults with the Random input technique by using the copy-and-paste functionality to copy the random input values generated by The Hat and paste them into the system. We also found faults with boundary value analysis and Equivalence Partitioning, because we could insert into the system's fields values not managed by the system, and faults with error guessing, because there is no control on the date. The results time table is shown below.


| TDT | Time to identify unique part of the system where to apply TC (min) | Time to create a unique TC (min) |
| Normal | 3 | 8 |
| Negative | 3 | 8 |
| Random | 3 | 12 |
| Boundary values | 6 | 9 |
| Eq. Partitioning | 7 | 9 |
| Fault injection | 10 | 40 |
| Error guessing | 4 | 9 |

TABLE 19. BANK SYSTEM RESULTS


7. RESULTS

In this section we show the results of this work. In subsection 7.1 we present results about the test techniques used to test the systems, in subsection 7.2 the results about system coverage, and in the following subsections the results about installation problems and about the classification of the systems and the test techniques.

7.1 TEST TECHNIQUES

Using some simple test techniques on open source systems, we found several faults, especially with Negative, Random, Boundary value analysis, Equivalence Partitioning and Fault injection, and the average first time to failure is around 1 hour.

A comparison of the test design techniques used in this experiment is given in the following pages. We can say that Normal and Negative testing have significant limitations, because they are not always repeatable. This is of course in relation to the system: some systems do not behave deterministically. We use a simple game system to show why they are not repeatable. A screenshot of the system is shown below.

Figure 15. Bejeweled Screenshot


The test case resulting from applying a test design technique must be repeatable; this means that to repeat it we must follow the steps written in its Input field (table Fig 25). If we look at this system and create a test case that moves shapes on the grid, it is impossible to execute it twice in the same way: when a new game is created, a new grid is loaded with the geometric shapes in different positions, so we can never have the same configuration twice. Therefore the technique is of limited use for some aspects of automatic test design on such systems.

We can also say that Boundary value analysis and Equivalence Partitioning are not useful for the "drawing" systems, since almost all the functionality of these systems is "click and drag", which makes it hard or impossible to find a range of values for these techniques.

The Random input technique requires constructing (or selecting) a tool that generates input for the system. This technique is probably not always feasible or economical, since one can question whether it would find any faults and therefore question the value of the technique. It is applicable, but not effective enough. To save time we selected a tool useful only for text fields, but in this way we miss many possible values that could be used as input.

Fault injection is always applicable to systems where it is possible to inspect the code, and it lets us study system behavior in the presence of faults. The injection is hard-coded and can be used to emulate permanent faults. Using a coverage tool it is also possible to determine the coverage of error detection, and developers can create recovery mechanisms.

Error guessing is useful only if performed by testers or developers who know the system. Common tests include division by zero, empty or null strings, and blank or null characters in strings.

7.2 OPEN SOURCE SYSTEM’S COVERAGE

The coverage percentage achieved depends on our effort. In our work we tested only small systems, and in almost all of them we saw a coverage between 60% and 80% (see Fig. 14) when testing the GUI functionality using Normal and Negative testing with a limited data selection. Using the EclEMMA tool, we can see the uncovered code and create new test cases (using other test techniques, see section 3) to cover those parts. In this way we can increase the coverage percentage by 15-20%.


7.3 OPEN SOURCE INSTALLING PROBLEMS

As a part of testing, installation testing is often forgotten. Installation testing is important if you want your software to be usable as open source. As a part of our thesis, we found some problems that should have been considered in an installation test. We can classify them in the following groups:

1. Dll and Jar missing: systems with some missing files (dll or jar), where it is possible to download the files from the internet and build them into the system.

2. Failure: systems with compilation errors or missing files that cannot be run even after the missing files are added.

7.4 SYSTEMS CLASSIFICATION

Before reading the results for the tested systems (see Table 20), we explain how to interpret them. As we said in section 5.8, when testing a system we fill in a table answering two questions:

1) Time to identify unique part of the system where to apply TC (min)
2) Time to create a unique TC (min)

In the figure below we focus on the system "Age Calculator" and the "Normal" testing technique.

Figure 16. How to read systems results table

The dashed circle (top-left corner) answers the question "Time to identify unique part of the system where to apply TC" and the dotted one (bottom-right corner) the question "Time to create a unique TC". Sometimes we cannot use a testing technique on a system; this is shown in the table with the symbol "-".


Now we show the table, filled in by answering the two questions above, which contains all the systems tested in our work. For each system and technique, the first number is the time to identify where to apply the TC and the second the time to create a TC (both in minutes); the last column and the last row give the totals.

| OS System | Normal | Negative | Random | Boundary Values | Eq. Partitioning | Fault injection | Error guessing | Total |
| Age calculator | 1 / 5 | 1 / 5 | 1 / 7 | - | - | 5 / 20 | - | 10 / 42 |
| Clean Sheet | 2 / 6 | 2 / 7 | 2 / 9 | 4 / 9 | 4 / 9 | 15 / 30 | - | 33 / 76 |
| Image Processing | 1 / 5 | 1 / 5 | 1 / 7 | 2 / 5 | 2 / 5 | 7 / 20 | - | 17 / 52 |
| DraW | 1 / 5 | 1 / 5 | 2 / 6 | 5 / 9 | 6 / 9 | 12 / 22 | - | 30 / 63 |
| ImageJ | 2 / 6 | 2 / 6 | 3 / 7 | 7 / 6 | 7 / 6 | 12 / 20 | - | 36 / 58 |
| LaTeX Draw | 2 / 6 | 2 / 7 | 2 / 7 | 5 / 6 | 6 / 6 | 15 / 20 | - | 35 / 59 |
| Bomberman | 2 / 7 | 3 / 7 | 3 / 10 | - | - | 20 / 23 | - | 28 / 47 |
| Bejeweled | 2 / 8 | 3 / 8 | 3 / 12 | 5 / 7 | 5 / 7 | 15 / 20 | - | 33 / 62 |
| Student Helper | 2 / 6 | 3 / 7 | 3 / 12 | 4 / - | 4 / - | 14 / 22 | - | 30 / 49 |
| EuroBudget | 2 / 5 | 2 / 5 | 5 / 7 | 2 / 5 | 3 / 6 | 15 / 17 | - | 31 / 50 |
| JMoney | 2 / 7 | 2 / 7 | 2 / 10 | 4 / 9 | 4 / 9 | 15 / 22 | - | 32 / 69 |
| JavaJusp | 2 / 6 | 2 / 6 | 2 / 8 | 4 / 9 | 4 / 9 | 18 / 23 | 3 / 5 | 35 / 76 |
| Bank System | 3 / 8 | 3 / 8 | 3 / 12 | 6 / 9 | 7 / 9 | 10 / 40 | 4 / 9 | 36 / 95 |
| UMLet | 2 / 6 | 3 / 6 | 3 / 10 | 4 / 7 | 5 / 7 | 15 / 25 | - | 36 / 69 |
| IRC Client | 2 / 5 | 2 / 5 | 2 / 5 | 4 / 6 | 4 / 6 | 19 / 30 | - | 33 / 57 |
| Total | 28 / 106 | 32 / 94 | 37 / 129 | 56 / 87 | 61 / 88 | 197 / 354 | 7 / 14 | |

TABLE 20. SYSTEMS RESULTS


On the basis of the table shown above, we create a classification of the tested systems based on the sum of the times spent (dashed circle) to identify the unique parts of the system across all testing techniques. The classification table is shown below:

| System name | Time to identify unique part of the system where to apply TC (min) |
| Age Calculator | 10 |
| Image processing | 17 |
| Bomberman | 28 |
| Bejeweled | 28 |
| Student Helper | 30 |
| DraW | 30 |
| EuroBudget | 31 |
| JMoney | 32 |
| Clean Sheet | 33 |
| IRC Client | 33 |
| LaTeXDraw | 35 |
| JavaJusp | 35 |
| ImageJ | 36 |
| Bank System | 36 |
| UMLet | 36 |

TABLE 21. SYSTEM'S CLASSIFICATION


7.5 TEST TECHNIQUES CLASSIFICATION

We have also created a classification of the test techniques using the sum of the times spent, across all the open source systems, to recognize where to apply TCs with the same testing technique (see the last row of table 20).

| Test Technique | Time spent to apply TCs (min) |
| Normal | 28 |
| Negative | 32 |
| Random | 37 |
| Error guessing | 37 |
| Boundary value analysis | 56 |
| Equivalence Partitioning | 61 |
| Fault injection | 197 |

TABLE 22. TEST TECHNIQUES CLASSIFICATION

From the table above we can see that normal, negative and random input do not take much time when it comes to recognizing where to apply TCs on the system. We cannot say the same for error guessing, boundary value analysis and equivalence partitioning, because the values shown in the table are lower than they would otherwise be: it was impossible to use these three techniques on some systems. The time to recognize where to apply TCs using fault injection is the highest value, even though three systems were not tested with this technique (see table 20), because we spent time checking out the code. We also show a graph (figure 17), based on table 22, to better understand how much time we spent on each technique.

Figure 17. Test techniques graph


7.6 STATISTICAL RESULTS

The tables below show statistics from our experiment. Table 23 shows, for each TDT, the average number of TCs needed to reach the first failure. The middle column shows how many TCs, on average, we needed to reach a failure with that TDT during this experiment. In the right-hand column we report the total number of TCs for each TDT and the number of faults found with them. We want to draw attention to Error Guessing: only 5 TCs are available and we encountered 5 faults, so its number of TCs to the first failure is not a trustworthy result, because it comes from only 2 systems, which is not enough compared to the number of TCs of the other TDTs. A minimal sketch of how such averages can be derived from the raw test-case records is shown after the table.

Test Design Technique    Number of TCs to first failure (average)    Number of TCs / faults found

Normal 4.7 61 TC/13 faults

Negative 2.6 56 TC/21 faults

Random 2.9 47 TC / 16 faults

Error Guessing 1 5 TC/ 5 faults

Boundary Value Analysis 2.2 40 TC/18 faults

Equivalence Partitioning 6.8 34 TC/ 5 faults

Fault Injection 3.1 25 TC/ 8 faults

TABLE 23. FAILURE AVERAGE.
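As an illustration only, the averages in Table 23 can be derived from raw per-TC records collected during the experiment. The following Java sketch uses a hypothetical record layout (technique, position of the TC in the run, failed flag); it is not the bookkeeping we actually used.

import java.util.*;

// Minimal sketch: derive, per test design technique, the position of the
// first failing TC, the total number of TCs and the number of failing TCs
// from a list of hypothetical test-case records.
public class FailureStats {

    record TestCaseRecord(String technique, int position, boolean failed) {}

    public static void main(String[] args) {
        // Example data for one system; real runs would load all 15 systems.
        List<TestCaseRecord> records = List.of(
                new TestCaseRecord("Negative", 1, false),
                new TestCaseRecord("Negative", 2, true),   // first Negative failure at TC 2
                new TestCaseRecord("Negative", 3, true),
                new TestCaseRecord("Normal",   1, false),
                new TestCaseRecord("Normal",   5, true));  // first Normal failure at TC 5

        Map<String, Integer> firstFailure = new HashMap<>();
        Map<String, Integer> totalTCs = new HashMap<>();
        Map<String, Integer> faults = new HashMap<>();

        for (TestCaseRecord r : records) {
            totalTCs.merge(r.technique(), 1, Integer::sum);
            if (r.failed()) {
                faults.merge(r.technique(), 1, Integer::sum);
                firstFailure.merge(r.technique(), r.position(), Math::min);
            }
        }

        // Averaging firstFailure over all systems gives the middle column of Table 23.
        firstFailure.forEach((tdt, pos) -> System.out.printf(
                "%s: first failure at TC %d, %d TC / %d faults%n",
                tdt, pos, totalTCs.get(tdt), faults.get(tdt)));
    }
}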

The following table (Table 24) illustrates how many TCs we have for each TDT and how many of them failed, for each system.

Age Calculator

Name of test                        Number of test cases    TC Failed

NRL Normal 2 0

NEG Negative 4 1

RND Random 2 1

BVA Boundary Value Analysis 0 0

EQP Eq. Partitioning 0 0

FIJ Fault Injection 3 2

CCV Code Coverage 1 -

ERG Error Guessing 0 0


Clean Sheet

Name of test                        Number of test cases    TC Failed

NRL Normal 5 3

NEG Negative 3 0

RND Random 5 2

BVA Boundary Value Analysis 3 2

EQP Eq. Partitioning 0 0

FIJ Fault Injection 5 1

CCV Code Coverage 1 -

ERG Error Guessing 0 0

DraW

Name of test                        Number of test cases    TC Failed

NRL Normal 4 0

NEG Negative 4 1

RND Random 3 1

BVA Boundary Value Analysis 3 2

EQP Eq. Partitioning 3 1

FIJ Fault Injection 2 1

CCV Code Coverage 1 -

ERG Error Guessing 0 0

IRC Client

Name of test                        Number of test cases    TC Failed

NRL Normal 4 3

NEG Negative 3 3

RND Random 0 0

BVA Boundary Value Analysis 0 0

EQP Eq. Partitioning 0 0

FIJ Fault Injection 0 0

CCV Code Coverage 0 0

ERG Error Guessing 0 0

JMoney

Name of test                        Number of test cases    TC Failed

NRL Normal 4 0

NEG Negative 4 1

RND Random 3 1

BVA Boundary Value Analysis 3 3

EQP Eq. Partitioning 3 0

FIJ Fault Injection 2 1

CCV Code Coverage 1 -

ERG Error Guessing 0 0

Bomberman

Name of test                        Number of test cases    TC Failed

NRL Normal 4 1

NEG Negative 1 0

RND Random 2 1

BVA Boundary Value Analysis 0 0

EQP Eq. Partitioning 0 0

FIJ Fault Injection 0 0

CCV Code Coverage 0 -

ERG Error Guessing 0 0


UMLet

Name of test                        Number of test cases    TC Failed

NRL Normal 5 1

NEG Negative 5 3

RND Random 5 2

BVA Boundary Value Analysis 6 0

EQP Eq. Partitioning 5 0

FIJ Fault Injection 0 0

CCV Code Coverage 0 -

ERG Error Guessing 0 0

Bejeweled

Name of test                        Number of test cases    TC Failed

NRL Normal 5 1

NEG Negative 3 0

RND Random 4 1

BVA Boundary Value Analysis 3 1

EQP Eq. Partitioning 3 1

FIJ Fault Injection 0 0

CCV Code Coverage 0 -

ERG Error Guessing 0 0

Image Processing

Name of test                        Number of test cases    TC Failed

NRL Normal 4 1

NEG Negative 3 4

RND Random 3 1

BVA Boundary Value Analysis 3 0

EQP Eq. Partitioning 3 0

FIJ Fault Injection 3 1

CCV Code Coverage 0 -

ERG Error Guessing 0 0

Student Helper

Name of test                        Number of test cases    TC Failed

NRL Normal 5 1

NEG Negative 5 1

RND Random 5 0

BVA Boundary Value Analysis 5 5

EQP Eq. Partitioning 5 0

FIJ Fault Injection 5 1

CCV Code Coverage 0 -

ERG Error Guessing 0 0

ImageJ

Name of test                        Number of test cases    TC Failed

NRL Normal 4 1

NEG Negative 4 1

RND Random 3 0

BVA Boundary Value Analysis 3 1

EQP Eq. Partitioning 3 0

FIJ Fault Injection 0 0

CCV Code Coverage 0 0

ERG Error Guessing 0 0


LaTeXDraw

Name of test                        Number of test cases    TC Failed

NRL Normal 4 0

NEG Negative 4 0

RND Random 3 0

BVA Boundary Value Analysis 3 0

EQP Eq. Partitioning 0 0

FIJ Fault Injection 0 0

CCV Code Coverage 0 -

ERG Error Guessing 0 0

JavaJusp

Name of test                        Number of test cases    TC Failed

NRL Normal 3 1

NEG Negative 3 3

RND Random 3 1

BVA Boundary Value Analysis 3 1

EQP Eq. Partitioning 3 1

FIJ Fault Injection 3 0

CCV Code Coverage 0 -

ERG Error Guessing 3 3

EuroBudget

Name of test                        Number of test cases    TC Failed

NRL Normal 4 0

NEG Negative 4 4

RND Random 4 2

BVA Boundary Value Analysis 3 1

EQP Eq. Partitioning 3 1

FIJ Fault Injection 0 0

CCV Code Coverage 0 -

ERG Error Guessing 0 0

Bank System

Name of test                        Number of test cases    TC Failed

NRL Normal 4 0

NEG Negative 4 0

RND Random 3 2

BVA Boundary Value Analysis 3 2

EQP Eq. Partitioning 3 1

FIJ Fault Injection 2 0

CCV Code Coverage 1 -

ERG Error Guessing 2 2

TABLE 24. TCS AND TC FAILED FOR EACH TDT


Finally, Table 25 shows the kinds of faults we encountered. The Fault type column gives the fault type and the TDT with which we discovered it; the Fault description column describes what happened during testing.

System    Fault type    Fault description

Age Calculator

1. Wrong result (neg); 2. Wrong data input (rand); 3. Wrong data input (2xFI)

Given symbols as input, we get a wrong result. Given symbols as the year input, it is impossible to insert the date "December, 31". An error results when we insert data in the year field.

Image Processing

1. No response (pos); 2. No response (rand); 3. Crash (neg); 4. Crash (2xneg); 5. Application freeze (FI)

When we click on the exit button, nothing happens. It is impossible to save an image. We click on "rotate left" when no image is loaded. We click on "sharpen" or "diffusion" when no image is loaded. We change the code and the system cannot finish the image conversion.

Bomberman

1. No stop condition (pos); 2. Wrong functionality (rand)

There is no stop condition. The system doesn't work.

Bejeweled

1. Application freeze (pos); 2. Crash (rand); 3. Crash (ep); 4. Wrong status (bva)

We try to save the game. In the settings we put "tud4ù/" into the decrease factor field. In the settings we put "100.000" into the difficulty level field. We select a timed game, but the system accepts "0" in the decrease field.

DraW

1. Wrong calculation (negative test); 2. Wrong calculation (random input); 3. Wrong result (bva); 4. Wrong result (ep); 5. Wrong result (fi)

Given negative input, the OK button works. Given negative input, the OK button works. Given -1 or 0 as a facet of a polygon, the system draws it. Given a value less than 0 as a facet of a polygon, the system draws it. We expect an alert from the system, but it changes the shape size normally.

EuroBudget

1. Wrong status (neg); 2. Wrong status (neg); 3. Wrong status (neg); 4. Wrong status (neg); 5. Wrong status (rand); 6. Wrong status (rand); 7. Wrong calculation (ep); 8. Wrong calculation (ep)

The system updates a transaction of €0. A new account is created by inserting empty fields. A new bank is created using "," as input for every field. A new account is created by inserting wrong input. "gsnfdi£" is accepted as IBAN input. "Fd/bi£qv%" is accepted as address input. An initial amount of -400 is accepted as input when a new account is created. An initial amount of -1 is accepted as input when a new account is created.


JMoney

1. No data check (neg); 2. No data check (rand); 3. No data check (3xbva); 4. Wrong functionality (FI)

All negative inputs are accepted. All random inputs are accepted. All boundary values are accepted, but the system cuts part of them. The PDF functionality doesn't work after user account creation.

Clean Sheet

1. Wrong functionality (pos); 2. Wrong functionality (pos); 3. Wrong result (pos); 4. Wrong result (rand); 5. Wrong functionality (rand); 6. Data manipulation (2xbva); 7. System failure (fi)

The print functionality doesn't work. The delete-sheet functionality doesn't work; it is only possible to add a new sheet. We select 4 columns of the first row, but the system paints the whole row when we click the "paint" button. Wrong result after clicking the "Apply" button. It is possible to use the "Save" functionality even if there is no open file. When we insert a boundary value, the system modifies it. After changing the code, it is no longer possible to add two numeric cells.

IRC Client

1. Crash (3xpos); 2. Wrong data (neg); 3. Crash (2xneg)

Any time we try to reach the server, the application freezes. There is no data input handling. Any time we try to reach the server, the application freezes.

LaTeXDraw    No faults/failures    ----

ImageJ

1. Wrong result (pos); 2. Wrong result (neg); 3. Wrong result (bva)

The reload functionality doesn't work. It is possible to copy an empty shape. Values out of the boundaries are accepted.

Bank System

1. Wrong result (rand); 2. No response (rand); 3. Wrong result (2xbva); 4. No response (ep); 5. Wrong result (2x error guessing)

An account is created by inserting characters and symbols in the account number field. The "Save" button doesn't work when a wrong value is inserted into the Dep. Amount field. The system accepts values out of the boundaries. The "Save" button doesn't work. It is possible to create a new account with a date field earlier than the actual day.

UMLet

1. No response (pos); 2. Wrong data input (neg); 3. Wrong data input (neg); 4. Wrong data input (neg); 5. No response (rand); 6. Unexpected result (rand)

The search field doesn't work. A double start point is accepted as input in UML. A triple notes overlap is accepted. The system accepts different UML standards in the same workspace. The search field doesn't work with random values. Pseudo-code manipulation is visible at system level.

Student Helper

1. Crash (pos); 2. Crash (neg); 3. Wrong value (5xbva); 4. Unexpected status (fi)

The system doesn't accept V&V as an exam name input. The system doesn't accept "&" as a character in the name or password field. The system accepts values out of bounds. The system works normally after the code change.

TABLE 25. SYSTEM FAULTS DESCRIPTION


8. FUTURE WORK AND CONCLUSION

In this work we describe how we systematically tested a series of open source systems by applying test techniques at defined levels (see section 5). We also show that these systems often lack fundamental testing, and we point out the main problems to consider before using an open source system. We created an open source system classification, and the results are shown in Table 20. We also present statistical tables as results of our experiment. In future work it would be interesting to apply more TDTs to our systems and to add integration testing in order to get a broader view of the statistics. We also plan to improve the code coverage technique by adding test cases to achieve 100% coverage, to describe which areas of a system are not reachable, and to record every failure and determine its type and fault.


A. APPENDIX “THE HAT”

This system is used to generate random numbers, names, symbols and anything else written in a file.

Figure 18. The Hat's GUI

Just like drawing names from a hat, it determines a random pick from a group of words: input a list of words, either manually or by importing them from a text file, then let the program randomly re-order the entire list automatically or pick any number of random words, complete with a cool animation and optional sound effects. It is also useful for teachers to assign partners for projects, games, etc.
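Conceptually, each random pick we take from The Hat amounts to drawing one line at random from a word list. A minimal Java sketch of that idea (our own illustration, not The Hat's actual code; the file name is only an example) is:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.Random;

// Read a word list and pick one entry at random, as The Hat does conceptually.
public class HatDraw {
    public static void main(String[] args) throws IOException {
        List<String> words = Files.readAllLines(Path.of("random-file.txt"));
        String picked = words.get(new Random().nextInt(words.size()));
        System.out.println("Picked: " + picked);
    }
}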


B. APPENDIX “CASE STUDY”

In this appendix we show all the details of our case study (for the system's overview, see section 6). In the following pages we list each discovered requirement and explain the data domain of all the fields/requirements.

Req. 1: Bank account creation
• Integer values (b)²
  o Account No.
  o Dep. Amount
• Alphanumeric values (a, b, c, d, e)
  o Personal Name
• Date (e)
  o Date: it is expressed as month, day and year; selection only via listbox.

Req. 2: Deposit money
• Integer values (b)
  o Account No.
  o Dep. Amount
• Alphanumeric values (a, b, c, d, e)
  o Personal Name
• Date (e)
  o Date: it is expressed as month, day and year; selection only via listbox.

Req. 3: Withdraw money
• Integer values (b)
  o Account No.
  o With. Amount
• Alphanumeric values (a, b, c, d, e)
  o Personal Name
• Date (e)
  o With. Date: it is expressed as month, day and year; selection only via listbox.

Req. 4: Search Customer
We have four different kinds of customer search:
1. Direct search of a specific customer, where the only requested parameter is:
   • Account No.: available;
   • Personal Name: not available;
   • Last Transaction: not available;
   • Balance: not available.
2. Direct search of a specific customer, where the only requested parameter is:
   • Account No.: not available;
   • Personal Name: available;
   • Last Transaction: not available;
   • Balance: not available.
We cannot access the "not available" values; they are filled in automatically after the account number / personal name match.
3. View the customers one by one.
4. View all customers using the default functionality.

² See the section "How To Select Values" for all the possible domains.

Req. 5: Delete Account Holder
• Integer values (b)
  o Account No.

Req. 6: Print
• Alphanumeric values (a, b, c, d, e)
  o Account No.: there is a problem with this print field, because Account No. is an integer field (b) but it is possible to insert values from the other domains (a, b, c, d, e).

Req. 7: Utilities
• Help: activation by click;
• Change background color: activation by click;
• Change layout style: activation by click;
• Apply theme: activation by click;
• Close active window: activation by click;
• Close all windows: activation by click;
• Shortcut keys: data input is given by key combinations (see Fig. 19 and Fig. 20).


Menu Name    Shortcut Key of Menu
File         Alt + F (to open the File menu)
Edit         Alt + E (to open the Edit menu)
View         Alt + V (to open the View menu)
Option       Alt + O (to open the Option menu)
Window       Alt + W (to open the Window menu)
Help         Alt + H (to open the Help menu)

Figure 19. Shortcut Keys for Menu

Option Name                Option's Shortcut Key
New Account                Ctrl + N
Print Customer Balance     Ctrl + P
Quit Bank System?          Ctrl + Q
Deposit Money              Ctrl + T
Withdraw Money             Ctrl + W
Delete Customer            Ctrl + D
Search Customer            Ctrl + S
View One By One            Ctrl + O
View All Customer          Ctrl + A
Change Background Color    Ctrl + B
Help Contents              Ctrl + H
Help on Shortcuts...       Ctrl + K
About BankSystem           Ctrl + C

Figure 20. Shortcut Keys for Option

After this review and study of the data input domains, we can conclude that there are three main domains for data input:

1. Integer number values (b);
2. Alphanumeric values (a, b, c, d, e);
3. Date [month][day number][year] (e).
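As an illustration only, each of these three domains can be expressed as a simple check. The following Java sketch is our own illustration and is not part of Bank System's code; the regular expressions are assumptions about what "integer" and "alphanumeric" mean here.

import java.time.DateTimeException;
import java.time.LocalDate;

// One check per input domain identified above (illustration only).
public class InputDomains {

    // Domain 1: integer number values (b)
    static boolean isIntegerValue(String s) {
        return s.matches("\\d+");
    }

    // Domain 2: alphanumeric values (a, b, c, d, e)
    static boolean isAlphanumeric(String s) {
        return s.matches("[A-Za-z0-9 ]+");
    }

    // Domain 3: date as [month][day number][year] (e); LocalDate rejects
    // impossible combinations such as "February 31".
    static boolean isValidDate(int year, int month, int day) {
        try {
            LocalDate.of(year, month, day);
            return true;
        } catch (DateTimeException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isIntegerValue("123"));    // true
        System.out.println(isAlphanumeric("Test"));   // true
        System.out.println(isValidDate(2009, 2, 31)); // false
    }
}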


In this section we show the TC table with all the TCs executed on the system "Bank System".

Test Case ID: BKSNRLS01
Test Technique and Level: Normal - System
Input:
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• File -> Open new account
• Fill in the following fields: Account No: 123; Person Name: Test; Deposit Date: April – 28 – 2009; Dep. Amount: 1000
• Click the button Save
Comment: -
Expected value: A new account is created.
Actual value: Account created.
Time: 8 min
Verdict: Passed

Test Case ID: BKSNRLI02
Test Technique and Level: Normal - System
Input:
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• Edit -> Deposit money
• Fill in the following fields: Account No: 123; Deposit Date: April – 29 – 2009; Dep. Amount: 100
• Click the button Save
Comment: -
Expected value: Money is added to the account number 123.
Actual value: 100 added to the account number 123. Result: The file is updated successfully.
Time: 8 min
Verdict: Passed

Test Case ID: BKSNRLI03
Test Technique and Level: Normal - System
Input:
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• Edit -> Withdraw money
• Fill in the following fields: Account No: 123; Deposit Date: April – 29 – 2009; With. Amount: 100
• Click the button Save
Comment: -
Expected value: Money is withdrawn from the account number 123.
Actual value: 100 withdrawn from the account number 123. Result: The file is updated successfully.
Time: 8 min
Verdict: Passed

Test Case ID: BKSNRLS04
Test Technique and Level: Normal - System
Input:
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• Edit -> Delete Customer
• Insert into the field Account No.: 123
• Click the button Delete and after that click the button Si
Comment: -
Expected value: Account number 123 is deleted.
Actual value: Account number 123 deleted.
Time: 9 min
Verdict: Passed

Test Case ID: BKSNEGS05
Test Technique and Level: Negative - System
Input:
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• File -> Open new account
• Fill in the field Account No: Test
Comment: Try to insert wrong values into the Account No. field. Domain: a.
Expected value: Error handling, or it is impossible to write characters into the field.
Actual value: Impossible to write characters into the Account No. field.
Time: 7 min
Verdict: Passed

Test Case ID: BKSNEGI06
Test Technique and Level: Negative - System
Input:
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• Edit -> Deposit money
• Fill in the field Dep. Amount: -100
Comment: Insert a negative value into the Dep. Amount field. Domain: d.
Expected value: Impossible to write negative values.
Actual value: Value not allowed.
Time: 8 min
Verdict: Passed

Test Case ID: BKSNEGI07
Test Technique and Level: Negative - System
Input:
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• Edit -> Withdraw money
• Fill in the field With. Amount: -100
Comment: Insert a negative value into the With. Amount field. Domain: d.
Expected value: Impossible to write negative values.
Actual value: Value not allowed.
Time: 8 min
Verdict: Passed

Test Case ID: BKSNEGI08
Test Technique and Level: Negative - System
Input:
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• File -> Print customer balance
• A new window is shown; insert the value "asd" and click the button Ok
Comment: Insert characters instead of numbers into the Account No. field. It is impossible to have an account with characters, because only numbers are allowed. Domain: a.
Expected value: Alert message.
Actual value: Result: Account No. doesn't exist.
Time: 8 min
Verdict: Passed

Test Case ID: BKSRNDI09
Test Technique and Level: Random - System
Input:
• Run the program C:\Programmi\The Hat\th.exe
• File -> Import names from file and load the file "C:\Documents and Settings\user\Desktop\The Hat – Random file.txt"
• Click on Pick individual name and click the button Pick; the result is: asd1asd2asd3asd##é°§11
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• Edit -> Delete Customer
• Insert into the field Account No.: asd1asd2asd3asd##é°§11 and click the button Delete
Comment: Insert a random input value into the field Account No. Domain: a, b, d.
Expected value: Impossible to write "asd1asd2asd3asd##é°§11" into the Account No. field, or an alert is shown because there is no account with that value.
Actual value: Result: an alert is shown (wrong value).
Time: 11 min
Verdict: Passed

Test Case ID: BKSRNDI10
Test Technique and Level: Random - System
Input:
• Run the program C:\Programmi\The Hat\th.exe
• File -> Import names from file and load the file "C:\Documents and Settings\user\Desktop\The Hat – Random file.txt"
• Click on Pick individual name and click the button Pick; the result is: 1GFAG43vq43rEé*ç°_°__:;;Ar32123
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• File -> Open new account
• Fill in the following fields: Account No: 1GFAG43vq43rEé*ç°_°__:;;Ar32123; Person Name: 1GFAG43vq43rEé*ç°_°__:;;Ar32123; Deposit Date: April – 28 – 2009; Dep. Amount: 1GFAG43vq43rEé*ç°_°__:;;Ar32123
• Click the button Save
Comment: Insert the random input value "1GFAG43vq43rEé*ç°_°__:;;Ar32123" into the fields used to create a new account. Domain: a, b, d.
Expected value: Impossible to create a new account, or an alert is shown.
Actual value: Account created.
Time: 12 min
Verdict: Failed

Test Case ID: BKSRNDI11
Test Technique and Level: Random - System
Input:
• Run the program C:\Programmi\The Hat\th.exe
• File -> Import names from file and load the file "C:\Documents and Settings\user\Desktop\The Hat – Random file.txt"
• Click on Pick individual name and click the button Pick; the result is: L06000000010BEST WAREHOUSE, LLC
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• Edit -> Deposit Money
• Type 1879 (Standard Account) into the field Account No. and copy-and-paste "L06000000010BEST WAREHOUSE, LLC" into the field Dep. Amount
• Click the button Save
Comment: Insert a wrong value into the Dep. Amount field. Domain: a, b.
Expected value: Alert message, because it is impossible to add money to account number 1879 when the money value is wrong.
Actual value: The button Save doesn't work.
Time: 15 min
Verdict: Failed

Test Case ID: BKSBVAI12
Test Technique and Level: Boundary Value Analysis - System
Input:
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• Edit -> Deposit Money
• Type 1879 (Standard Account) into the field Account No. and copy-and-paste "-1" into the field Dep. Amount
• Click the button Save
Comment: Boundary values are [-1, 0, 1].
Expected value: Impossible to subtract or insert a negative value into the Dep. Amount field.
Actual value: Value allowed and withdrawn from the Dep. Amount of Account No. 1879.
Time: 9 min
Verdict: Failed

Test Case ID: BKSBVAI13
Test Technique and Level: Boundary Value Analysis - System
Input:
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• Edit -> Deposit Money
• Type 1879 (Standard Account) into the field Account No. and copy-and-paste "0" into the field Dep. Amount
• Click the button Save
Comment: Boundary values are [-1, 0, 1].
Expected value: Impossible to add 0 money to the Dep. Amount field.
Actual value: Value allowed.
Time: 9 min
Verdict: Failed

Test Case ID: BKSBVAI14
Test Technique and Level: Boundary Value Analysis - System
Input:
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• Edit -> Deposit Money, type 1879 (Standard Account) into the field Account No., copy-and-paste "1" into the field Dep. Amount and click the button Save
Comment: Boundary values are [-1, 0, 1].
Expected value: It is possible to type the value 1 into Dep. Amount.
Actual value: Dep. Amount value allowed.
Time: 9 min
Verdict: Passed
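The boundary checks exercised in BKSBVAI12-I14 can also be written as a small automated check. This is only a sketch of our own, assuming the validation rule implied by the expected results (a deposit amount must be at least 1); it is not code taken from Bank System.

// Plain-Java sketch of the boundary checks in BKSBVAI12-I14.
public class DepositBoundaryCheck {

    // Assumed rule: -1 and 0 must be rejected, 1 must be accepted.
    static boolean isValidDepositAmount(long amount) {
        return amount >= 1;
    }

    public static void main(String[] args) {
        long[] boundaryValues = {-1, 0, 1};
        boolean[] expected    = {false, false, true};

        for (int i = 0; i < boundaryValues.length; i++) {
            boolean actual = isValidDepositAmount(boundaryValues[i]);
            System.out.printf("Dep. Amount %d -> valid=%b (expected %b) %s%n",
                    boundaryValues[i], actual, expected[i],
                    actual == expected[i] ? "Passed" : "Failed");
        }
    }
}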

Test Case ID: BKSEQPI15
Test Technique and Level: Eq. Partitioning - System
Input:
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• Edit -> Deposit Money
• Type Account No.: 1879 and Dep. Amount: -10000
Comment: Insert a negative number into the Dep. Amount field. Domain: out of b.
Expected value: Alert message when the value is typed, or impossible to write a negative number.
Actual value: Impossible to write a negative number.
Time: 9 min
Verdict: Passed

Test Case ID: BKSEQPI16
Test Technique and Level: Eq. Partitioning - System
Input:
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• Edit -> Deposit Money
• Type Account No.: 1879 and Dep. Amount: 10000
• Click the button Save
Comment: Insert a value within the valid range. Domain: b.
Expected value: Value allowed by the system.
Actual value: Result: The file is updated successfully.
Time: 9 min
Verdict: Passed

Test Case ID: BKSEQPI17
Test Technique and Level: Eq. Partitioning - System
Input:
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• Edit -> Deposit Money
• Type Account No.: 1879 and Dep. Amount: 2999999999
• Click the button Save
Comment: Insert a value above the range. Domain: out of b.
Expected value: Alert message, because it is impossible to insert the value 2999999999 (it is out of range).
Actual value: The button Save doesn't work.
Time: 9 min
Verdict: Failed
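The three equivalence classes behind BKSEQPI15-I17 (below the range, within the range and above the range of the Dep. Amount field) can be sketched as follows. The upper limit used here is our own assumption for illustration; the test cases only show that 2999999999 is out of range.

// Sketch of the equivalence partitions used for the Dep. Amount field.
public class DepositPartitions {

    static final long ASSUMED_MAX = 2_000_000_000L;  // hypothetical upper limit

    static String partitionOf(long amount) {
        if (amount < 0)           return "invalid: below range (out of b)";
        if (amount > ASSUMED_MAX) return "invalid: above range (out of b)";
        return "valid: within range (b)";
    }

    public static void main(String[] args) {
        // One representative value per partition, as in the three EQP test cases.
        long[] representatives = {-10_000L, 10_000L, 2_999_999_999L};
        for (long value : representatives) {
            System.out.println(value + " -> " + partitionOf(value));
        }
    }
}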

Test Case ID: BKSERGS18
Test Technique and Level: Error Guessing - System
Input:
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• File -> Open new account
• Fill in the following fields: Account No: 2; Person Name: Test; Deposit Date: January – 1 – 2000; Dep. Amount: 1000
• Click the button Save
Comment: Create a new account with a date field earlier than the actual day. Actual day: April 28, 2009.
Expected value: Impossible to create a new account with a date older than the actual day.
Actual value: Account created.
Time: 8 min
Verdict: Failed

Test Case ID: BKSERGS19
Test Technique and Level: Error Guessing - System
Input:
• Run the system in the folder C:\Documents and Settings\user\Desktop\Complete_B16824812102003
• File -> Open new account
• Fill in the following fields: Account No: 2; Person Name: Test; Deposit Date: February – 31 – 2009; Dep. Amount: 1000
• Click the button Save
Comment: Create a new account with a date field earlier than the actual day; moreover, 31 February doesn't exist. Actual day: April 28, 2009.
Expected value: Impossible to create a new account with a date like 31 February 2009.
Actual value: Account created.
Time: 8 min
Verdict: Failed

TABLE 26. BANK SYSTEM TCS TABLE


Fault Injection Testing

The first affected function is in NewAccount.java; we replace the source code from line 86 to line 93.
Test Case ID: BKSFIJC20
Expected result: we want to change the data parameters during account creation.

Figure 21. Selected source code in BKSFIJC20

Figure 22. Affected source code in BKSFIJC20
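Purely as a hypothetical illustration of this kind of injected change (the real NewAccount.java code is shown only in Figures 21 and 22 and is not reproduced here), assuming the dialog fills its day and year list boxes in simple loops, the injection might look like this:

import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration only, not the actual NewAccount.java code:
// the loops that fill the day and year list boxes are narrowed, so only
// days 1-20 and years 2014-2015 remain selectable.
public class InjectedDateRanges {

    static List<Integer> days = new ArrayList<>();
    static List<Integer> years = new ArrayList<>();

    static void fillDateListBoxes() {
        for (int day = 1; day <= 20; day++) {         // assumed original bound: 31
            days.add(day);
        }
        for (int year = 2014; year <= 2015; year++) { // assumed original range: wider
            years.add(year);
        }
    }

    public static void main(String[] args) {
        fillDateListBoxes();
        System.out.println("Selectable days: " + days);
        System.out.println("Selectable years: " + years);
    }
}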

Result: the data parameters change; it is possible to select only days from 1 to 20 and years from 2014 to 2015.
Time: 37 minutes
Verdict: Pass
Comment: from the source code we can see some logic errors. In our view it would be better to change the routine and add an automatic check of the data, to avoid possible errors such as "April, 31, 2015". The code coverage of this TC is shown below (Figure 23).


Figure 23. Code coverage BKSFIJC20

The second affected function is in FindAccount.java, in particular the Findrec() function; we replace the source code from line 161 to line 167.
Test Case ID: BKSFIJC21
Expected result: we want to change the find functionality and restrict the range of the search-holder feature.

Figure 24. Selected source code in BKSFIJC21


Figure 25. Affected source code in BKSFIJC21

Result: the data parameters change; it is possible to select only the Account No. shown in the first row of "View All Account Holders" (see Figure 26).

Figure 26. Result from BKSFIJC21 at system level

Time: 37 minutes
Verdict: Pass
Comment: the search function works correctly. The code coverage of this TC is shown below (Figure 27).


Figure 27. Code coverage BKSFIJC21
