
FaMa Test Suite v1.0

Sergio Segura, David Benavides and Antonio Ruiz-Cortés

{sergiosegura,benavides,aruiz}@us.es

Applied Software Engineering Research Group, University of Seville, Spain

February 2009

Technical Report ISA-09-TR-01


This report was prepared by the

Applied Software Engineering Research Group (ISA)
Department of Computer Languages and Systems
Av/ Reina Mercedes S/N, 41012 Seville, Spain
http://www.isa.us.es/

Copyright © 2009 by ISA Research Group.

Permission to reproduce this document and to prepare derivative works from this document for internal use is granted, provided the copyright and 'No Warranty' statements are included with all reproductions and derivative works.

NO WARRANTY
THIS ISA RESEARCH GROUP MATERIAL IS FURNISHED ON AN 'AS-IS' BASIS. ISA RESEARCH GROUP MAKES NO WARRANTIES OF ANY KIND, EITHER EXPRESSED OR IMPLIED, AS TO ANY MATTER INCLUDING, BUT NOT LIMITED TO, WARRANTY OF FITNESS FOR PURPOSE OR MERCHANTABILITY, EXCLUSIVITY, OR RESULTS OBTAINED FROM USE OF THE MATERIAL.

Use of any trademarks in this report is not intended in any way to infringe on the rights of the trademark holder.

Support: This work has been partially supported by the European Commission (FEDER) and the Spanish Government under CICYT project Web-Factories (TIN2006-00472) and by the Andalusian Government under project ISABEL (TIC-2533).


Contents

1 Introduction
  1.1 Motivating scenario

2 Preliminaries
  2.1 Feature models
  2.2 Automated analyses of feature models
  2.3 Black-box testing techniques

3 Test suite design
  3.1 Identification of inputs and outputs
  3.2 Inputs selection
    3.2.1 Feature models
    3.2.2 Products
    3.2.3 Features
  3.3 Inputs combination
  3.4 Test cases report

4 Test suite evaluation and refinement
  4.1 Preliminary evaluation on the motivating scenario
  4.2 Evaluation using mutation testing
  4.3 Discussion

5 Test suite summary and conclusions
  5.1 Test suite summary
  5.2 Conclusions

A Test cases
  A.1 Operation Void Feature Model
  A.2 Operation Valid Product
  A.3 Operation All Products
  A.4 Operation Number of Products
  A.5 Operation Variability
  A.6 Operation Commonality
  A.7 Operation Dead Features


List of Figures

2.1 A sample feature model
3.1 Equivalence partitions on feature models
3.2 Some possible combinations of mandatory and optional relationships
3.3 Input feature models with dead features (grey features are dead)
3.4 Input products selection using partition equivalence and boundary analysis
3.5 Input features selection using partition equivalence and boundary analysis
4.1 Void feature model included in the suite
4.2 Adequacy evaluation using mutation testing on CSPReasoner
4.3 Adequacy evaluation using mutation testing on BDDReasoner
4.4 Adequacy evaluation using mutation testing on SATReasoner


List of Tables

3.1 Analysis operations addressed in the suite
3.2 Number of test cases designed using pairwise testing
3.3 Three of the test cases included in the suite
4.1 Results of the adequacy evaluation using mutation testing
5.1 General overview of the FaMa Test Suite
A.1 Operation VoidFM. Test cases
A.2 Operation ValidProduct. Test cases
A.3 Operation AllProducts. Test cases
A.4 Operation Number of Products. Test cases
A.5 Operation Variability. Test cases
A.6 Operation Commonality. Test cases
A.7 Operation DeadFeatures. Test cases


Abstract

Feature Models (FMs) are a key asset for variability and commonality management in software product lines. The analysis of FMs is rapidly gaining importance: new analysis operations have been proposed, new tools are being developed to support those operations, and different logical paradigms and algorithms are proposed to perform them. Implementing these operations is far from trivial and easily leads to errors and defects in current solutions. In this context, the lack of specific testing mechanisms is becoming a major obstacle affecting tool quality and reliability. In this technical report, we present the FaMa Test Suite, a set of implementation-independent test cases to validate the functionality of FM analyses. This is an efficient and handy mechanism to assess the functionality of analysis tools, detecting faults and improving their quality. In order to show the effectiveness of our proposal, an evaluation of the suite using mutation testing on three open source FM analysis tools is reported.


Chapter 1

Introduction

Software Product Line (SPL) engineering is a systematic approach to developing families of software products. Feature models [20] are widely used to represent all products of an SPL in a single model in terms of features and relationships among them. Analysing a feature model means observing its properties by means of analysis operations. Typical analysis operations allow determining whether a feature model is void (i.e. it represents no products), whether it contains errors (e.g. features that cannot be part of any product) or the number of products of the SPL represented by the model. Catalogues with a number of analysis operations identified on feature models are reported in the literature [3, 6, 8].

Many approaches have been proposed to automate the analysis of feature models. Most translate feature models into specific logic paradigms such as propositional logic [2, 16, 22, 30, 33], description logic [15, 32] or constraint programming [7, 31]. Others propose ad-hoc algorithms and solutions to perform these analyses [10, 14, 25]. Additionally, there are tools supporting these analysis capabilities such as Feature Model Plug-in¹, Pure::Variants², AHEAD Tool Suite³ and FaMa Framework⁴.

Feature model analysis tools commonly deal with complex data structures and algorithms (e.g. the FaMa framework contains more than 10,000 lines of code). This makes the implementation of an analysis far from trivial and easily leads to errors, increasing development time and reducing the reliability of current solutions. The lack of specific testing mechanisms in this context also appears as a major obstacle for engineers when trying to assess the functionality and quality of their programs.

In a previous work [29], we detailed our intention to design a set of test cases to facilitate the testing of analysis operations. In this technical report, we present our first result in that direction: the FaMa Test Suite, a set of implementation-independent test cases to validate the functionality of feature model analysis tools.

¹http://gp.uwaterloo.ca/fmp/
²http://www.pure-systems.com/
³http://www.cs.utexas.edu/users/schwartz/ATS.html
⁴http://www.isa.us.es/fama/


These test cases were both designed and evaluated using well-known techniques from the software testing community. Using them, faults can be rapidly detected, reducing development time and improving the reliability and quality of analysis tools. Briefly, we next describe the main characteristics of the suite:

• Operations tested. The current version of the FaMa Test Suite addresses 7 out of the 12 analysis operations on feature models identified in the literature [6, 8].

• Testing techniques. Four black-box testing techniques were used to design the test cases, namely: equivalence partitioning, boundary-value analysis, pairwise testing and error guessing.

• Test cases. The suite is composed of 190 test cases. Each test case is designed in terms of the inputs (i.e. feature models and some other parameters) and expected outputs of the analysis operation under test.

• Adequacy. We evaluated the effectiveness of our suite using mutation testing as follows. Firstly, we generated hundreds of faulty versions (so-called mutants) of three open source analysis tools integrated into the FaMa framework. Then, we executed our suite against those mutants and checked how many of them were detected by our test cases. As a result, the suite identified 81.5% of the generated mutants, which shows the feasibility of our proposal.

The remainder of this report is structured as follows: Chapter 2 presents feature models, their analyses and black-box testing techniques in a nutshell. A detailed description of how we designed our test cases is presented in Chapter 3. Chapter 4 describes the adequacy evaluation and refinement of the suite. Finally, we summarize the main aspects of our proposal and our main conclusions in Chapter 5.

1.1 Motivating scenario

Consider the work of Batory at SPLC'05 [2], one of the seminal papers in the community of automated analysis. The paper included a bug (later fixed⁵) in the mapping from a feature model to a propositional formula. The problem arises when dealing with alternative relationships whose parent feature is not mandatory. We implemented this wrong mapping in a mock analysis tool. The key point, therefore, is how to test the tool to identify the failure and the fault causing it. We could try checking the behaviour of the tool with standard models whose expected outputs were known beforehand. However, the defect would remain latent at least until testing the tool with a problematic input (i.e. one containing alternative relationships whose parent is not mandatory).

⁵ftp://ftp.cs.utexas.edu/pub/predator/splc05.pdf



As an example, consider the feature model in Figure 2.1. Using the mock tool, we got that the number of products represented by the model is 1792, which is correct. We then changed the 'Security' feature from mandatory to optional and got 3584 products (instead of the actual 2016), revealing the failure. However, note that faults could remain latent even when using problematic inputs. For instance, consider the operation to find out whether a model is void (i.e. it represents no products). The execution of this operation in our mock tool did not identify the fault in any of the cases considered previously.

Finally, note that even when revealing the failure, it would still be hard and time-consuming to identify the source of the problem.


Chapter 2

Preliminaries

2.1 Feature models

Feature Models [20] are used to model sets of software systems in terms of features and relationships among them (see Figure 2.1). A feature is an increment in product functionality [3]. Feature models are widely used during the whole product line development process. Hence, it is commonly accepted that feature models can be used in different stages of a software product line effort in order to produce other assets such as requirements documents [22], architecture definitions [24] or pieces of code [4].

A feature model is visually represented as a tree-like structure in which nodes represent features and edges illustrate the relationships among them. Figure 2.1 shows a simplified example of a feature model representing an e-commerce SPL. The model illustrates how features are used to specify and build on-line shopping systems. The software of each application is determined by the features that it provides. The root feature (i.e. E-Shop) identifies the SPL. The relationships between a parent feature and its child features can be divided into:

• Mandatory. If a child feature is mandatory, it is included in all products in which its parent feature appears. For instance, every on-line shopping system in our example must implement a catalogue of products.

• Optional. If a child feature is defined as optional, it can be optionally included in all products in which its parent feature appears. For instance, Banners is defined as an optional feature.

• Alternative. A set of child features is defined as alternative if only one feature can be selected when their parent feature is part of the product. In our SPL, a shopping system has to implement a high or medium security policy, but not both in the same product.

• Or-Relation. A set of child features is said to have an or-relationship with their parent when one or more of them can be included in the products in which their parent feature appears. A shopping system can implement several payment modules: bank transfer, credit card or both of them.

[Figure 2.1: A sample feature model of an e-commerce SPL. Features include E-Shop (root), Catalogue, Info, Description, Image, Price, Search, Basic, Advanced, Security, High, Medium, Payment, Bank Draft, Credit Card, Visa, American Express, Offers, Mobile, PC, GUI and Banners; the legend distinguishes mandatory, optional, alternative and or relationships, and requires and excludes constraints.]

Notice that a child feature can only appear in a product if its parent feature does. The root feature is part of all the products within the SPL. In addition to the parental relationships between features, a feature model can also contain cross-tree constraints between features. These are typically of the form:

• Requires. If a feature A requires a feature B, the inclusion of A in a product implies the inclusion of B in that product. On-line shopping systems accepting payments with credit card must implement a high security policy.

• Excludes. If a feature A excludes a feature B, both features cannot be part of the same product. Shopping systems implementing a mobile GUI must not include support for banners.

Feature models were first introduced as part of the FODA (Feature-Oriented Domain Analysis) method back in 1990 [20]. Since then, feature modelling has been widely adopted by the software product line community and some extensions have been proposed. Currently, two main groups of feature model notations are used in the literature: basic feature models (a.k.a. FeatuRSEB) [8, 18] and cardinality-based feature models [12]. We refer the reader to [28] for a detailed survey on the different feature model languages.

2.2 Automated analyses of feature models

The automated analysis of feature models deals with the computer-aided extraction of information from feature models. From the information obtained, marketing strategies and technical decisions can be derived [6]. Catalogues with a number of analysis operations identified on feature models are reported in the literature [3, 6, 8]. Next, we summarize some of the analysis operations that we will refer to throughout the rest of this technical report.

• Determining if a feature model is void. This operation takes a feature model as input and returns a value informing whether such a feature model is void or not. A feature model is void if it represents no products [2, 6, 7, 10, 14, 15, 16, 22, 25, 27, 30, 32, 33].

• Finding out if a product is valid. This operation checks whether an input product (i.e. a set of features) belongs to the set of products represented by a given feature model or not. As an example, let us consider the feature model of Figure 2.1 and the following product: P={E-Shop, Catalogue, Info, Description, Security, Medium, GUI, PC}. Notice that P is not a valid product of the product line represented by the model because it does not include the mandatory feature 'Payment' [2, 6, 7, 16, 22, 25, 27, 30, 32].

• Obtaining all products. This operation takes a feature model as input and returns all the products represented by the model [2, 6, 7, 10, 14, 16, 30].

• Calculating the number of products. This operation returns the number of products represented by a feature model. The model in Figure 2.1 represents 1792 different products [6, 7, 10, 14, 22].

• Calculating variability. This operation takes a feature model as input and returns the ratio between the number of products and 2^n - 1, where n is the number of features in the model [6, 7]. This operation may be used to measure the flexibility of the product line. For instance, a small factor means that the number of combinations of features is very limited compared to the total number of potential products. In Figure 2.1, Variability = 0.00042 (see the sketch after this list).

• Calculating commonality. This operation takes a feature model and a feature as inputs and returns a value representing the proportion of valid products in which the feature appears [6, 7]. This operation may be used to prioritize the order in which features are to be developed [24] and can also be used to detect dead features [31]. In Figure 2.1, Commonality(Banners) = 25%.

• Dead features detection. This operation takes a feature model as input and returns the set of dead features (if any). A dead feature is a feature that never appears in any of the products represented by the feature model [6, 31, 33]. These are caused by cross-tree constraints. Some common cases of dead features are presented in Figure 3.3 (Section 3.2.1).
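To make the two numeric operations above concrete, the following minimal Java sketch (our illustration, not FaMa code; the class and method names are hypothetical) computes variability and commonality over an explicitly enumerated set of products:

    import java.util.Set;

    // Minimal illustration (not FaMa code) of the variability and
    // commonality operations, computed over an enumerated product set.
    public class AnalysisMetrics {

        // Variability = #products / (2^n - 1), where n is the number of features.
        static double variability(Set<Set<String>> products, int numFeatures) {
            return products.size() / (Math.pow(2, numFeatures) - 1);
        }

        // Commonality(f) = percentage of products in which feature f appears.
        static double commonality(Set<Set<String>> products, String feature) {
            long count = products.stream().filter(p -> p.contains(feature)).count();
            return 100.0 * count / products.size();
        }

        public static void main(String[] args) {
            // Products of a toy model: root A with a single optional child B.
            Set<Set<String>> products = Set.of(Set.of("A"), Set.of("A", "B"));
            System.out.println(variability(products, 2));   // 2 / (2^2 - 1) = 0.666...
            System.out.println(commonality(products, "B")); // 50.0
        }
    }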

Previous operations can be performed automatically using different approaches. Most translate feature models into specific logic paradigms such as propositional logic [2, 16, 22, 30, 33], description logic [15, 32] or constraint programming [7, 31]. Others propose ad-hoc algorithms and solutions to perform these analyses [10, 14, 25]. Finally, the previous analysis capabilities can also be found in several available tools. Some examples of these are Feature Model Plug-in, Pure::Variants, AHEAD Tool Suite and FaMa Framework.
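For reference, the propositional-logic encoding of basic feature models underlying most of the approaches cited above can be sketched as follows (our paraphrase of the corrected mapping; $r$ denotes the root feature, $P$ a parent feature and $C, C_1, \dots, C_n$ its children):

$$
\begin{aligned}
\text{root} &: r\\
\text{mandatory}(P, C) &: P \leftrightarrow C\\
\text{optional}(P, C) &: C \rightarrow P\\
\text{or}(P, \{C_1, \dots, C_n\}) &: P \leftrightarrow (C_1 \lor \dots \lor C_n)\\
\text{alternative}(P, \{C_1, \dots, C_n\}) &: P \leftrightarrow (C_1 \lor \dots \lor C_n) \;\land\; \bigwedge_{i<j} \lnot (C_i \land C_j)\\
A \text{ requires } B &: A \rightarrow B\\
A \text{ excludes } B &: \lnot (A \land B)
\end{aligned}
$$

Under this encoding, a feature model is void iff the conjunction of all the formulas is unsatisfiable, and a product is valid iff its characteristic truth assignment satisfies the conjunction. Note that the alternative encoding above remains correct even when the parent feature is not mandatory, which is precisely the case that triggered the bug discussed in Section 1.1.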

2.3 Black-box testing techniques

Software testing is performed by means of test cases. A test case is a set of test inputs and expected outputs developed to verify compliance with a specific requirement [11, 23]. Test cases may be grouped into so-called test suites. A variety of testing techniques have been reported to assist in the design of effective test cases, i.e. those that will find more faults with less effort and time [5, 11, 23]. In this report, we focus on the so-called black-box testing techniques. These techniques focus on checking whether a program does what it is supposed to do based on its specification [23]. No knowledge about the internal structure or implementation of the software under test is assumed. We refer to four of these techniques throughout this technical report. Briefly, these are:

• Equivalence partitioning. This technique is used to reduce the number of test cases to be developed while still maintaining reasonable test coverage [11, 23]. In this technique, the input domain of the program is divided into partitions (also called equivalence classes) in which the program is expected to process the input data in a similar (i.e. equivalent) way. According to this testing approach, only one or a few test cases from each partition are needed to evaluate the behaviour of the program for the corresponding partition. Thus, selecting a subset of test cases from each partition is enough to test the program effectively while keeping a manageable number of test cases.

• Boundary value analysis. This technique is used to guide the tester when selecting inputs from equivalence classes [11, 23]. According to this technique, programmers usually make mistakes with the input values located on the boundaries of the equivalence classes. This method therefore guides the tester to select those inputs located on the 'edges' of the equivalence partitions.

• Pairwise testing (also called 2-wise testing). This is a combinatorial software testing method focusing on testing all possible discrete combinations of two input parameters [11, 17]. According to the hypothesis behind this technique, the most common errors involve one input parameter. The next most common category of errors consists of those dependent on interactions between pairs of parameters, and so on. Thus, the main goal of this technique is to address the second most common class of errors (those involving two parameters) while keeping the number of test cases at a manageable level.

• Error guessing. This is a software testing technique based on the ability of the tester to predict where faults are located according to their experience in the domain [11]. Using this technique, test cases are specifically designed to exercise typical error-prone points related to the type of system under test.


Chapter 3

Test suite design

In this chapter, we describe how we designed the test cases that compose our suite. These are mainly input-output combinations specifically created to reveal failures in the implementations of analysis operations on feature models.

Creating test cases for every possible permutation of a program is impractical and very often impossible; there are simply too many input-output combinations [23]. Thus, as recalled by Pressman [26], the objective when testing is to 'design tests that have the highest likelihood of finding most errors with a minimum amount of time and effort'. To assist us in the process, we evaluated a number of techniques reported in the literature [11, 23]. We focused on black-box techniques since we want our test cases to rely on the specification of the analysis operations rather than on specific implementations. In particular, we found four of these techniques to be effective and generic enough to be applicable to our domain, namely: equivalence partitioning, boundary-value analysis, pairwise testing and error guessing (see Section 2.3).

For the design of the suite we followed four steps, namely:

1. Identification of inputs and outputs.

2. Selection of representative instances of each type of input.

3. Combination of previous instances in those operations receiving more thanone input parameter.

4. Test cases report.

In the following, we detail how we carried out these steps.

3.1 Identification of inputs and outputs

The current version of our suite addresses seven analysis operations on feature models (detailed in Section 2.2). We selected these operations for their extended use in the community of automated analysis. Our suite is based on the formal specification of the analysis operations proposed by Benavides [6].

Table 3.1 summarizes these operations in terms of their inputs and outputs. As illustrated, inputs are composed of feature models, products and features. Outputs mainly comprise collections of products and features together with numeric and boolean values.

Operation          | Inputs       | Output
Void FM            | FM           | Boolean value
Valid product      | FM + Product | Boolean value
Products           | FM           | Collection of products
Number of products | FM           | Numeric value
Variability        | FM           | Numeric value
Commonality        | FM + Feature | Numeric value
Dead features      | FM           | Collection of features

Table 3.1: Analysis operations addressed in the suite

The feature model language supported in the current version of the suite is based on FeatuRSEB [18, 27], also referred to as basic feature models [8] (see Section 2.1). We selected this notation for its simplicity and extended use in current feature model analysis tools and literature.

3.2 Inputs selection

In this section, we explain how we selected the inputs to be used in our test cases. For each type of input (i.e. feature models, products and features), we next describe the techniques used and how we applied them.

3.2.1 Feature models

We found two testing techniques to be helpful for the selection of a suitable set of input feature models, namely: equivalence partitioning and error guessing. Next, we detail how we applied them:

Equivalence partitioning

The potential number of input feature models is limitless. To select a representative set of these, we propose dividing input feature models into equivalence classes according to the different types of relationships and constraints among features, i.e. mandatory, optional, or, alternative, requires and excludes. We consider this to be a natural partition, intuitively used in most proposals when defining the mapping from a feature model to a specific logic paradigm (e.g. a constraint satisfaction problem) [2, 9, 10, 14, 15, 16, 22, 25, 30, 32, 33]. Therefore, according to this technique, if a feature model with a single mandatory relationship is correctly managed by an operation, we can assume that those with more than one mandatory relationship will also be processed successfully.


To keep the number of equivalence classes at a manageable level, we propose dividing the input domain into three groups of partitions as follows:

1. Feature models including a single type of relationship or constraint. Inputs from these partitions would help us to reveal failures when processing isolated relationships and constraints. For basic feature models, 6 partitions are created: feature models including mandatory relationships, optional, or, alternative, requires and excludes. Figure 3.1 (a) depicts the inputs we selected from each equivalence class using this criterion. Note that models with requires and excludes constraints also include an additional relationship (e.g. optional) to make them syntactically correct, i.e. share a common parent feature.

2. Feature models combining two different types of relationships or constraints. We propose testing how analysis tools process feature models with different relationships by designing all the possible combinations of two of these, i.e. mandatory-optional, mandatory-or, etc. Following this criterion with basic feature models, we created a second group of 13 partitions. Figure 3.1 (b) presents the input models we took from each one of them.

Note that, in contrast to conventional inputs such as numeric values, the possible combinations of two feature relationships are not unique. Rather, they increase exponentially with the number of features. In general terms, feature relationships may appear in two main forms, child and sibling relationships (see Figure 3.2). For our tests, we designed inputs including both of them. According to our experience, the analysis of these inputs usually results in more complex constraints and therefore in more effective test cases. For those input models including cross-tree constraints, we gave priority to simplicity and used sibling relationships exclusively.

3. Feature models including three or more different types of relationships or constraints. We finally propose creating a last partition including all those feature models not included in the previous equivalence classes. Note that this is a very generic partition that could be divided into smaller ones. However, we did not find any evidence justifying such an increment in the number of test cases. Figure 3.1 (c) illustrates the input feature model we took from this partition.

As a result of the application of this technique, we got 20 feature models representing the whole input domain of basic feature models (see Figure 3.1). These were used as inputs in most of the analysis operations included in our suite.

Error guessing

Some common errors in feature models have been reported in the literature [6, 31]. As an example, Trinidad et al. [31] present some common problematic situations in which dead features appear (Figure 3.3). Following the guidelines of error guessing, we propose using these models as suitable inputs to check whether dead features are correctly detected by the tools under test (i.e. by the operation for the detection of dead features). This way, we kept the number of input models at a reasonable level while still having fair confidence in the ability of our tests to reveal failures.


[Figure 3.1: Equivalence partitions on feature models. (a) Feature models with a single relationship or constraint; (b) feature models including two different types of relationships or constraints; (c) a feature model including all different types of relationships and constraints.]

[Figure 3.2: Some possible combinations of mandatory and optional relationships]


[Figure 3.3: Input feature models with dead features (grey features are dead)]

3.2.2 Products

These are used in the operation that determines whether a given product is included in those represented by a feature model (i.e. the 'Valid product' operation). For the selection of these inputs, we propose applying equivalence partitioning and boundary analysis as follows.

Firstly, we propose dividing input products into two equivalence partitions: valid and non-valid products. For each of these equivalence classes, inputs should be treated in the same way by the program under test and should produce the same answer. Then, we propose using boundary analysis for the systematic selection of input products from each partition. In particular, we suggest quantifying products according to the number of features they include. Then, we propose selecting those products with a minimum and a maximum number of features to check the behaviour of the operation on the 'edges' of the partitions.

Figure 3.4 depicts an example of how we applied equivalence partitioning and boundary analysis for the selection of input products. For each input feature model, we propose selecting four input products, two valid and two non-valid, as follows:

• VP-1: valid product with the minimum number of features. This product can be helpful to determine whether non-mandatory features are erroneously included in all products. For instance, a failure using VP-1={A,C} would suggest that some of the other features (i.e. B, D, E, F or G) are erroneously treated as mandatory ones.

• VP-2: valid product with the maximum number of features. This product can allow us to reveal failures when processing feature relationships. For instance, we may use VP-2={A,B,D,F,G} to assess whether both the optional and the alternative relationships are correctly processed. Note that VP-1 would still be helpful to detect whether non-mandatory features are erroneously included in all products.

• NVP-1: non-valid product with the minimum number of features. This product gives us some confidence about the ability of our tests to detect failures when processing set relationships. In the example, NVP-1={A} can help us to check that the parent feature of an alternative relationship is not included in any product without at least one of its child features.

• NVP-2: non-valid product with the maximum number of features. This product helps us to reveal failures when processing non-valid products with a maximum number of features. For instance, NVP-2={A,B,C,D,E,F,G} may reveal failures derived from including more than one alternative feature in a product. Once again, note that we would still need NVP-1 to make sure that A is not included in any product without at least one of its child features, C or D.

[Figure 3.4: Input products selection using partition equivalence and boundary analysis. For the sample model: valid products VP-1={A,C} and VP-2={A,B,D,F,G}; non-valid products NVP-1={A} and NVP-2={A,B,C,D,E,F,G}.]

We identified two special causes making a product non-valid, namely: i) not including the root feature (e.g. product {C,B,E} for the model in Figure 3.4), and ii) including non-existent features (e.g. product {A,C,H} for the model in Figure 3.4). These causes are applicable to all products independently of the characteristics of the input feature model. Therefore, we did not consider these situations for products NVP-1 and NVP-2. Instead, we checked these problems in two separate test cases so as to be as accurate as possible when informing about failures.

As a result of the application of these techniques, we got 68 test cases for the operation 'Valid product'. As input feature models, we used those presented in the previous section, created using equivalence partitioning.
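To illustrate how such test cases can be implemented, the following JUnit 4 sketch encodes VP-1 and NVP-1 from Figure 3.4; the ValidProductReasoner interface and the hand-enumerated oracle are hypothetical stand-ins for the tool under test, not FaMa code:

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    import java.util.Set;
    import org.junit.Test;

    public class ValidProductTest {

        // Hypothetical stand-in for the analysis tool under test.
        interface ValidProductReasoner {
            boolean isValidProduct(Set<String> product);
        }

        // Naive oracle used here only so the sketch runs: two valid products
        // of the model in Figure 3.4, enumerated by hand.
        private final ValidProductReasoner reasoner = product ->
                Set.of(Set.of("A", "C"), Set.of("A", "B", "D", "F", "G"))
                   .contains(product);

        @Test
        public void vp1_minimumValidProductIsAccepted() {
            assertTrue(reasoner.isValidProduct(Set.of("A", "C")));
        }

        @Test
        public void nvp1_rootAloneIsRejected() {
            assertFalse(reasoner.isValidProduct(Set.of("A")));
        }
    }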

3.2.3 Features

Given a feature model, the commonality operation informs us about the percentage of products in which an input feature appears. For the selection of these input features, we propose applying equivalence partitioning and boundary analysis as follows.

To apply equivalence partitioning, we suggest focusing on the result space of the commonality operation. More specifically, we propose considering a single output partition: from 0% to 100% commonality. Then, we propose applying boundary analysis and selecting those input features returning a value situated on the 'edges' of the output partition. Figure 3.5 depicts an example of our proposal. For each input feature model, two input features are used, one with minimum commonality (G=40%) and another one with maximum commonality (C=80%). We intentionally exclude the root feature, whose commonality is trivially 100%. We also included an additional test case to check the behaviour of the operation when receiving a non-existent feature as input (e.g. feature H for the model in Figure 3.5).

Using these techniques, we got 32 test cases for the commonality operation. As input feature models, we used those presented in the previous section, created using equivalence partitioning.

[Figure 3.5: Input features selection using partition equivalence and boundary analysis. The output partition ranges from 0% to 100%; F-1=G has the minimum commonality (40%) and F-2=C the maximum (80%).]

3.3 Inputs combination

Combination strategies define ways to select values for individual input parameters and to combine them to form complete test cases [17]. These are used when the program under test receives more than one input parameter. This is the case for two of the operations included in our suite: 'Valid product' (it receives a feature model and a product) and 'Commonality' (it receives a feature model and a feature). To create effective test cases for these operations, we used a pairwise combination strategy. That is, we created a test case for each different combination of the two input parameters. Hence, our suite satisfies 2-wise coverage, 2 being the maximum number of inputs received by the operations included in it.

Table 3.2 shows the number of inputs and the resulting test cases for each operation using this strategy. We distinguish between the theoretical number of test cases and the real one. The reason for this distinction is that not all input products and features were applicable to the input feature models used in our suite (e.g. some feature models have no non-valid products). As proposed by Grindal et al. [17], we worked out the theoretical number of test cases as the product of the number of different values of each parameter (i.e. 20*4=80 and 20*2=40).

Operation     | FMs | Products | Features | Test cases (real/theoretical)
Valid product | 20  | 4        | -        | 68/80
Commonality   | 20  | -        | 2        | 32/40

Table 3.2: Number of test cases designed using pairwise testing.
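For two parameters, the combination step itself reduces to a filtered cross product; a rough Java sketch (placeholder types and filter, ours rather than FaMa's) is:

    import java.util.ArrayList;
    import java.util.List;

    // Rough sketch of the combination step (placeholder types and filter,
    // not FaMa code). For two parameters, pairwise coverage reduces to the
    // cross product of the two input sets.
    public class InputCombiner {

        record TestCase(String featureModel, String product) {}

        static List<TestCase> combine(List<String> featureModels, List<String> products) {
            List<TestCase> cases = new ArrayList<>();
            for (String fm : featureModels) {
                for (String p : products) {
                    if (isApplicable(fm, p)) {  // drop combinations that make no sense
                        cases.add(new TestCase(fm, p));
                    }
                }
            }
            return cases;
        }

        // Placeholder filter: a real implementation would discard, e.g., a
        // non-valid product for a feature model that has no non-valid products,
        // which is why the real counts in Table 3.2 are below the theoretical ones.
        static boolean isApplicable(String featureModel, String product) {
            return true;
        }
    }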

3.4 Test cases report

To conclude the design of our suite, we organized the selected inputs and their expected outputs into test cases; 185 in total. For their specification, we followed the guidelines of the IEEE Standard for Software Testing Documentation [1]. As an example, Table 3.3 depicts three of the test cases included in the suite. For each test case, an ID, a description, inputs, expected outputs and intercase dependencies (if any) are presented. Intercase dependencies refer to the identifiers of test cases that must be executed prior to a given test case. As an example, test case 'P-9' tries to reveal failures when obtaining the products of feature models including mandatory and alternative relationships. Note that test cases 'P-1' (testing the mandatory relationship in isolation) and 'P-4' (testing the alternative relationship in isolation) should be executed beforehand. Test case 'C-28' exercises the interaction between requires and excludes constraints when calculating commonality. Finally, test case 'VP-43' is designed to reveal failures when checking whether an input product is included in those represented by a feature model including or- and alternative relationships. A complete list of the test cases included in the suite is reported in Appendix A.


ID    | Description | Input | Exp. Output | Deps.
P-9   | Check whether the interaction between mandatory and alternative relationships is correctly processed. | [FM diagram] | {A,B,D,F}, {A,B,D,E}, {A,B,C,F,G}, {A,B,C,E,G} | P-1, P-4
C-28  | Check whether the interaction between 'requires' and 'excludes' constraints is correctly processed. The input feature has minimum commonality. | [FM diagram], Feature=B | 0% | C-2, C-5, C-6, C-7
VP-43 | Check whether valid products (with a maximum set of features) are correctly identified in feature models containing or- and alternative relationships. | [FM diagram], P={A,B,D,E,F,G,H} | Valid | VP-5, VP-6, VP-7, VP-8, VP-9, VP-10

Table 3.3: Three of the test cases included in the suite


Chapter 4

Test suite evaluation and refinement

4.1 Preliminary evaluation on the motivating scenario

For a preliminary evaluation of the suite, we checked whether it was able to detect the fault introduced in the mock analysis tool of our motivating scenario (see Section 1.1). We implemented the test cases using JUnit¹. The execution took less than one minute. The suite revealed the fault when testing four out of the seven analysis operations: 'Products', 'Number of products', 'Commonality' and 'Variability'. However, the defect remained latent when testing the operations 'Void FM', 'Valid product' and 'Dead features'. We found this fault significant enough to refine our suite with new test cases able to discover the defect in these operations. Using error guessing, we created two new test cases and modified another one to deal with this type of fault.

As an example, Figure 4.1 depicts the input we designed to reveal the failure when testing the operation 'Void FM'. The model is void (i.e. it represents no products). However, our mock analysis tool erroneously takes the configuration {A,C,D,E} as a valid product, which is wrong since it includes two alternative features (D and E) without their parent feature (B).

4.2 Evaluation using mutation testing

In order to measure the effectiveness of our proposal, we evaluated the ability of our test cases to detect faults in the software under test (the so-called fault-based adequacy criterion [34]). To that purpose, we applied mutation testing to three open source FM analysis tools integrated into the FaMa framework.

¹http://www.junit.org/


[Figure 4.1: Void feature model included in the suite]

Mutation testing [13, 19, 21, 34] is a common fault-based testing technique that measures the effectiveness of test cases. Briefly, the method works as follows. Firstly, simple faults are introduced into a program, creating a collection of faulty versions, called mutants. The mutants are created from the original program by applying syntactic changes to its source code. Each syntactic change is determined by a so-called mutation operator. Test cases are then used to check whether the mutants and the original program produce different responses. If a test case distinguishes the original program from a mutant, we say the mutant has been killed and the test case has proved to be effective at finding faults in the program. Otherwise, the mutant remains alive. The percentage of killed mutants with respect to the total number of them provides an adequacy measurement of the test suite called the mutation score.
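As a toy illustration of the idea (our own code, unrelated to FaMa), consider an original method and a mutant produced by a single relational operator replacement; only inputs on which the two versions disagree can kill the mutant:

    public class MutantDemo {

        // Original: a feature model is void iff it represents no products.
        static boolean isVoid(long numberOfProducts) {
            return numberOfProducts == 0;
        }

        // Mutant: a relational operator replacement turned '==' into '>='.
        static boolean isVoidMutant(long numberOfProducts) {
            return numberOfProducts >= 0;
        }

        public static void main(String[] args) {
            // A void model (0 products) does not distinguish the two versions,
            // so a test using only this input leaves the mutant alive...
            System.out.println(isVoid(0) == isVoidMutant(0));       // true
            // ...but any non-void model makes them disagree and kills it.
            System.out.println(isVoid(1792) == isVoidMutant(1792)); // false
        }
    }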

The reasons that led us to select mutation testing as a suitable approach for evaluating our proposal are threefold:

• Mutation techniques are independent of the nature of the test inputs and the testing approach used. This makes it easy to use them in the domain of feature models.

• A variety of tools to automate both the creation of mutants and their execution is available. Some examples are MuJava², Jumble³ and Nester⁴.

• We had access to the FaMa framework as a good candidate to be mutated. FaMa is an open source framework integrating different tools for the automated analysis of feature models. As leaders of the project, it was feasible for us to use it for the mutations.

Next, we describe the main steps we followed to evaluate the adequacy of our tests using mutation testing:

1. Selection of the programs to be mutated. We selected three of the feature model analysis tools integrated into the FaMa Framework, namely: CSPReasoner (using constraint programming by means of the Choco2⁵ solver), BDDReasoner (using binary decision diagrams by means of the JavaBDD⁶ solver) and SATReasoner (using satisfiability problems by means of the Sat4j⁷ solver). Each one of these reasoners uses a different paradigm to perform the analyses and was originally coded by different developers. Thus, we consider they provide the required heterogeneity for the evaluation of our suite.

²http://www.ise.gmu.edu/~offutt/mujava/
³http://jumble.sourceforge.net/
⁴http://nester.sourceforge.net/
⁵http://www.emn.fr/x-info/choco-solver/

2. We made sure the selected reasoners passed all the test cases of the test suite.

3. Selection of a mutation tool. We chose the MuClipse Eclipse plug-in⁸. MuClipse is a Java, easy-to-use and visual tool for mutation testing. It supports a wide variety of operators and can be used both for generating mutants and for executing them.

4. Mutants generation. MuClipse supports two kinds of mutation operators, traditional and class-level operators [21]. The former (15 operators) are applicable to most programming languages whereas the latter (28 operators) are specifically designed for OO languages. We mainly applied traditional operators since our suite is not intended to detect OO-specific faults (e.g. removing a static declaration). Exceptionally, we found it useful to apply the class-level operators related to common OO programming mistakes. According to our experience, these are the source of many problems when using off-the-shelf OO solvers. The mutation process generated 721 mutants (see Table 4.1) by applying 19 operators. For details about the specific operators used in our evaluation, we refer the reader to [21].

5. Mutants execution. We found some limitations in MuClipse and other related tools that made it difficult to execute the mutants in a framework as complex as FaMa. To overcome this problem, we designed an ad-hoc Java application to execute the mutants in our framework (sketched below). Briefly, this works as follows. Firstly, the program replaces the '.class' file of the class under evaluation with the '.class' file of the current mutant. Then, it executes the test cases using JUnit and checks whether any of the tests fails, i.e. kills the mutant. Following this process, we executed all the mutants and obtained the results.
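A minimal sketch of the JUnit part of that ad-hoc application, assuming the current mutant's '.class' file has already been copied into place (class and method names are ours, not the actual FaMa tooling):

    import org.junit.runner.JUnitCore;
    import org.junit.runner.Result;

    public class MutantRunner {

        // Runs the given JUnit test class against whichever version of the
        // code is currently on the classpath and reports whether any test
        // failed, i.e. whether the installed mutant was killed.
        static boolean isKilled(Class<?> testClass) {
            Result result = JUnitCore.runClasses(testClass);
            return result.getFailureCount() > 0;
        }
    }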

Figures 4.2, 4.3 and 4.4 summarize the results obtained when applying mutation testing to CSPReasoner, BDDReasoner and SATReasoner respectively. For each operation, the following information is presented: name of the operation, lines of code of the classes implementing the operation, number of mutants created by each operator, number of test cases associated with the operation, total number of mutants executed, total number of killed mutants and score (i.e. percentage of killed mutants). Mutation operators not generating any mutant are not presented in the tables.

⁶http://javabdd.sourceforge.net/
⁷http://www.sat4j.org/
⁸http://muclipse.sourceforge.net/


As illustrated, the FaMa Test Suite identified 77.8% of the executed mutants on CSPReasoner, 81.6% on BDDReasoner and 75.6% on SATReasoner. Neither BDDReasoner nor SATReasoner provides support for the detection of dead features.

Note that some reusable classes were used in multiple operations. As a result, mutants derived from these classes were exercised more than once with different test cases. This explains why the number of mutants executed is larger than the number of mutants generated.

[Figure 4.2: Adequacy evaluation using mutation testing on CSPReasoner]

[Figure 4.3: Adequacy evaluation using mutation testing on BDDReasoner]

[Figure 4.4: Adequacy evaluation using mutation testing on SATReasoner]

Table 4.1 depicts the overall results of our evaluation. For each reasoner, the following information is presented: name, total lines of code, number of mutants generated, number of mutants killed and mutation score. As illustrated, the FaMa Test Suite identified 81.5% of the generated mutants. The rest of them (i.e. 18.5%) were not distinguished by the test cases. We identified two main causes explaining this, namely:

• Non-covered mutations. Reusable classes were mutated but not used by every single operation. As a result, some portions of code were occasionally mutated but not exercised by the suite, and thus the mutants could not be detected.

• Equivalent mutants. These are mutants that keep the program's semantics unchanged and thus cannot be detected by any test. Identifying equivalent mutants is a manual, time-consuming and, in general, undecidable problem [34]. This is currently the major obstacle when applying mutation testing.

Estimating the percentages of equivalent and non-covered mutants is a time-consuming task out of the scope of this technical report. However, we still consider that the identification of 588 out of 721 mutants is a positive result which shows the effectiveness of our proposal.

Reasoner       | LOC  | Mutants | Killed | Score
Choco2Reasoner | 2810 | 219     | 173    | 78.99%
BDDReasoner    | 2625 | 267     | 231    | 86.5%
SATReasoner    | 2824 | 235     | 184    | 78.29%
Total          | 8259 | 721     | 588    | 81.55%

Table 4.1: Results of the adequacy evaluation using mutation testing

4.3 Discussion

The application of testing techniques is open to many interpretations. Each of these affects the balance between the number of test cases and test coverage. A good example is the equivalence partitions on feature models. Trying to be exhaustive when creating these partitions may increase the number of test cases noticeably. For instance, consider creating a different partition for each possible combination of two, three, four, five and six different types of relationships and constraints on basic feature models, e.g. mandatory-optional-or. As shown in the formula below, this would lead to the creation of 63 different partitions (well beyond the 20 proposed in this report). Using these partitions in our suite would increase the number of potential test cases to 643. In turn, this number could easily exceed a thousand if we tested child and sibling relationships separately, i.e. in different equivalence classes.

$$\sum_{i=1}^{N} \binom{N}{i} = 2^{N} - 1 = 63, \qquad N = 6$$


To keep a reasonable balance between the number of test cases and test coverage, we kept in mind a number of generic recommendations from the testing literature, namely: i) we gave priority to those decisions reducing the number of test cases, ii) we avoided redundancies by designing each test case to reveal a single type of fault, and iii) we designed simple test cases whose output could be worked out manually, to avoid the tests themselves becoming error-prone.

We remark that the potential users of the suite are all the tools supporting the analysis of feature models. The suite can be used by simply implementing the test cases in the desired platform and executing them.


Chapter 5

Test suite summary and conclusions

5.1 Test suite summary

One of the goals of our suite is to be easily integrated into test plans and test design specifications following established guidelines such as the IEEE Standard for Software Testing Documentation [1]. To facilitate this integration, we next summarize our test suite using the common terms described in this standard, namely:

• Test suite identifier. We refer to our test cases as the 'FaMa Test Suite' since it has been designed as part of the FaMa tool suite. A version naming scheme has also been used to keep a record of the evolution of the suite. The current version is 1.0.

• Features tested. The current version of the suite addresses 7 out of the 12 analysis operations on basic feature models identified in the literature [6, 8] (detailed in Section 2.2). The suite may be used to test any number of these independently, i.e. there are no dependencies among operations.

• Testing techniques used. Four main testing techniques were used to design the test cases, namely: i) Equivalence Partitioning (EP), ii) Boundary-Value Analysis (BVA), iii) Pairwise Testing (PT), and iv) Error Guessing (EG).

• Global input constraints. These specify input constraints that must hold for every input in the set of associated test cases. For the sake of simplicity, two main input constraints are imposed, namely: i) input feature models must be syntactically correct (e.g. checking models for conformance to a metamodel), and ii) all input parameters of the analysis operations must be provided.


• Feature pass/fail criteria. A feature (i.e. an operation) is said to pass the test when all the test cases associated with that feature are successful.

Table 5.1 summarizes the general aspects of the test suite. The number of test cases associated with each operation and the testing techniques used in each case are also included.

Test suite identifier: FaMa Test Suite (v1.0)

Features tested    | Test cases | Techniques used
Void FM            | 21         | EP, PT, EG
Valid product      | 69         | EP, PT, BVA, EG
Products           | 20         | EP, PT
Number of products | 20         | EP, PT
Variability        | 20         | EP, PT
Commonality        | 32         | EP, PT, BVA
Dead features      | 8          | EG
Total              | 190        |

Global input constraints:
- Input FMs must be syntactically correct
- All input parameters are required

Feature pass criteria:
- Pass 100% of associated test cases

Table 5.1: General overview of the FaMa Test Suite

A complete list of the test cases that compose the suite is reported in Appendix A. The input models used in our test cases are also available in XML format at the FaMa Tool Suite Web site¹.

5.2 Conclusions

In this technical report, we have presented the FaMa Test Suite, a set of implementation-independent test cases to validate the functionality of tools supporting the analysis of feature models. Through the implementation of our test cases, faults can be rapidly detected, improving the reliability and quality of FM analysis tools. For its design, we used popular techniques from the software testing community to help us in the creation of a representative set of input-output combinations. To evaluate its effectiveness, we applied mutation testing to three open source feature model analysis tools integrated into the FaMa framework. These tools use different underlying paradigms and were coded by different developers, which provides the necessary heterogeneity for the evaluation. We obtained a mutation score (including equivalent mutants) of 81.5%, which shows the feasibility of our proposal. As a result, we firmly believe this is a first step toward the development of a widely accepted test suite to support functional testing in the community of automated analysis of feature models.

¹http://www.isa.us.es/fama/?FaMa Test Suite


Appendix A

Test cases

In this appendix, we present the test cases included in the current version of the FaMa Test Suite. For each operation, the associated test cases are detailed.

A.1 Operation Void Feature Model

ID    | Description | Input | Exp. Output | Deps.
VM-1  | Check whether mandatory relationships are correctly managed by the operation. | [FM diagram] | Non-void | -
VM-2  | Check whether optional relationships are correctly managed by the operation. | [FM diagram] | Non-void | -
VM-3  | Check whether or-relationships are correctly managed by the operation. | [FM diagram] | Non-void | -
VM-4  | Check whether alternative relationships are correctly managed by the operation. | [FM diagram] | Non-void | -
VM-5  | Check whether 'requires' constraints are correctly managed by the operation. | [FM diagram] | Non-void | VM-2
VM-6  | Check whether 'excludes' constraints are correctly managed by the operation. | [FM diagram] | Non-void | VM-2
VM-7  | Check whether the interaction between mandatory and optional relationships is correctly processed. | [FM diagram] | Non-void | VM-1, VM-2
VM-8  | Check whether the interaction between mandatory and or-relationships is correctly processed. | [FM diagram] | Non-void | VM-1, VM-3
VM-9  | Check whether the interaction between mandatory and alternative relationships is correctly processed. | [FM diagram] | Non-void | VM-1, VM-4
VM-10 | Check whether the interaction between mandatory relationships and 'requires' constraints is correctly processed. | [FM diagram] | Non-void | VM-1, VM-5
VM-11 | Check whether the interaction between mandatory relationships and 'excludes' constraints is correctly processed. | [FM diagram] | Void | VM-1, VM-6
VM-12 | Check whether the interaction between optional and or-relationships is correctly processed. | [FM diagram] | Non-void | VM-2, VM-3
VM-13 | Check whether the interaction between optional and alternative relationships is correctly processed. | [FM diagram] | Non-void | VM-2, VM-4
VM-14 | Check whether the interaction between or- and alternative relationships is correctly processed. | [FM diagram] | Non-void | VM-3, VM-4
VM-15 | Check whether the interaction between or-relationships and 'requires' constraints is correctly processed. | [FM diagram] | Non-void | VM-3, VM-5
VM-16 | Check whether the interaction between or-relationships and 'excludes' constraints is correctly processed. | [FM diagram] | Non-void | VM-3, VM-6
VM-17 | Check whether the interaction between alternative relationships and 'requires' constraints is correctly processed. | [FM diagram] | Non-void | VM-4, VM-5
VM-18 | Check whether the interaction between alternative relationships and 'excludes' constraints is correctly processed. | [FM diagram] | Non-void | VM-4, VM-6
VM-19 | Check whether the interaction between 'requires' and 'excludes' constraints is correctly processed. | [FM diagram] | Non-void | VM-2, VM-5, VM-6
VM-20 | Check whether the interaction among three or more different types of relationships and constraints is correctly processed. | [FM diagram] | Non-void | VM-1,...,VM-19
VM-21 | Check whether child features in an alternative relationship can be erroneously selected without including their parent feature. | [FM diagram] | Void | VM-2, VM-4

Table A.1: Operation VoidFM. Test cases

A.2 Operation Valid Product

ID | Description | Input | Exp. Output | Deps.
VP-1 | Check whether valid products are correctly identified in feature models with mandatory relationships. | [FM diagram: A, B]; P={A,B} | Valid | none
VP-2 | Check whether non-valid products are correctly identified in feature models with mandatory relationships. | [FM diagram: A, B]; P={A} | Non-valid | none
VP-3 | Check whether valid products (with a minimum set of features) are correctly identified in feature models with optional relationships. | [FM diagram: A, B]; P={A} | Valid | none
VP-4 | Check whether valid products (with a maximum set of features) are correctly identified in feature models with optional relationships. | [FM diagram: A, B]; P={A,B} | Valid | none
VP-5 | Check whether valid products (with a minimum set of features) are correctly identified in feature models with or-relationships. | [FM diagram: A, B, C]; P={A,B} | Valid | none
VP-6 | Check whether valid products (with a maximum set of features) are correctly identified in feature models with or-relationships. | [FM diagram: A, B, C]; P={A,B,C} | Valid | none
VP-7 | Check whether non-valid products are correctly identified in feature models with or-relationships. | [FM diagram: A, B, C]; P={A} | Non-valid | none
VP-8 | Check whether valid products are correctly identified in feature models with alternative relationships. | [FM diagram: A, B, C]; P={A,B} | Valid | none
VP-9 | Check whether non-valid products (with a minimum set of features) are correctly identified in feature models with alternative relationships. | [FM diagram: A, B, C]; P={A} | Non-valid | none
VP-10 | Check whether non-valid products (with a maximum set of features) are correctly identified in feature models with alternative relationships. | [FM diagram: A, B, C]; P={A,B,C} | Non-valid | none
VP-11 | Check whether valid products (with a minimum set of features) are correctly identified in feature models with 'requires' constraints. | [FM diagram: A, B, C]; P={A} | Valid | VP-3, VP-4
VP-12 | Check whether valid products (with a maximum set of features) are correctly identified in feature models with 'requires' constraints. | [FM diagram: A, B, C]; P={A,B,C} | Valid | VP-3, VP-4
VP-13 | Check whether non-valid products are correctly identified in feature models with 'requires' constraints. | [FM diagram: A, B, C]; P={A,B} | Non-valid | VP-3, VP-4
VP-14 | Check whether valid products (with a minimum set of features) are correctly identified in feature models with 'excludes' constraints. | [FM diagram: A, B, C]; P={A} | Valid | VP-3, VP-4
VP-15 | Check whether valid products (with a maximum set of features) are correctly identified in feature models with 'excludes' constraints. | [FM diagram: A, B, C]; P={A,C} | Valid | VP-3, VP-4
VP-16 | Check whether non-valid products are correctly identified in feature models with 'excludes' constraints. | [FM diagram: A, B, C]; P={A,B,C} | Non-valid | VP-3, VP-4
VP-17 | Check whether valid products (with a minimum set of features) are correctly identified in feature models containing mandatory and optional relationships. | [FM diagram: A, B, C, D, E]; P={A,B} | Valid | VP-1, VP-2, VP-3, VP-4
VP-18 | Check whether valid products (with a maximum set of features) are correctly identified in feature models containing mandatory and optional relationships. | [FM diagram: A, B, C, D, E]; P={A,B,C,D,E} | Valid | VP-1, VP-2, VP-3, VP-4
VP-19 | Check whether non-valid products (with a minimum set of features) are correctly identified in feature models containing mandatory and optional relationships. | [FM diagram: A, B, C, D, E]; P={A} | Non-valid | VP-1, VP-2, VP-3, VP-4
VP-20 | Check whether non-valid products (with a maximum set of features) are correctly identified in feature models containing mandatory and optional relationships. | [FM diagram: A, B, C, D, E]; P={A,B,C,D} | Non-valid | VP-1, VP-2, VP-3, VP-4
VP-21 | Check whether valid products (with a minimum set of features) are correctly identified in feature models containing mandatory and or-relationships. | [FM diagram: A, B, C, D, E, F, G]; P={A,B,C,F} | Valid | VP-1, VP-2, VP-5, VP-6, VP-7
VP-22 | Check whether valid products (with a maximum set of features) are correctly identified in feature models containing mandatory and or-relationships. | [FM diagram: A, B, C, D, E, F, G]; P={A,B,C,D,E,F,G} | Valid | VP-1, VP-2, VP-5, VP-6, VP-7
VP-23 | Check whether non-valid products (with a minimum set of features) are correctly identified in feature models containing mandatory and or-relationships. | [FM diagram: A, B, C, D, E, F, G]; P={A} | Non-valid | VP-1, VP-2, VP-5, VP-6, VP-7
VP-24 | Check whether non-valid products (with a maximum set of features) are correctly identified in feature models containing mandatory and or-relationships. | [FM diagram: A, B, C, D, E, F, G]; P={A,B,C,D,G} | Non-valid | VP-1, VP-2, VP-5, VP-6, VP-7
VP-25 | Check whether valid products (with a minimum set of features) are correctly identified in feature models containing mandatory and alternative relationships. | [FM diagram: A, B, C, D, E, F, G]; P={A,B,D,E} | Valid | VP-1, VP-2, VP-8, VP-9, VP-10
VP-26 | Check whether valid products (with a maximum set of features) are correctly identified in feature models containing mandatory and alternative relationships. | [FM diagram: A, B, C, D, E, F, G]; P={A,B,C,E,G} | Valid | VP-1, VP-2, VP-8, VP-9, VP-10
VP-27 | Check whether non-valid products (with a minimum set of features) are correctly identified in feature models containing mandatory and alternative relationships. | [FM diagram: A, B, C, D, E, F, G]; P={A} | Non-valid | VP-1, VP-2, VP-8, VP-9, VP-10
VP-28 | Check whether non-valid products (with a maximum set of features) are correctly identified in feature models containing mandatory and alternative relationships. | [FM diagram: A, B, C, D, E, F, G]; P={A,B,C,D,E,F,G} | Non-valid | VP-1, VP-2, VP-8, VP-9, VP-10
VP-29 | Check whether valid products are correctly identified in feature models containing mandatory relationships and 'requires' constraints. | [FM diagram: A, B, C]; P={A,B,C} | Valid | VP-1, VP-2, VP-11, VP-12, VP-13
VP-30 | Check whether non-valid products (with a minimum set of features) are correctly identified in feature models containing mandatory relationships and 'requires' constraints. | [FM diagram: A, B, C]; P={A} | Non-valid | VP-1, VP-2, VP-11, VP-12, VP-13
VP-31 | Check whether non-valid products (with a maximum set of features) are correctly identified in feature models containing mandatory relationships and 'requires' constraints. | [FM diagram: A, B, C]; P={A,B} | Non-valid | VP-1, VP-2, VP-11, VP-12, VP-13
VP-32 | Check whether non-valid products (with a minimum set of features) are correctly identified in feature models containing mandatory relationships and 'excludes' constraints. | [FM diagram: A, B, C]; P={A} | Non-valid | VP-1, VP-2, VP-14, VP-15, VP-16
VP-33 | Check whether non-valid products (with a maximum set of features) are correctly identified in feature models containing mandatory relationships and 'excludes' constraints. | [FM diagram: A, B, C]; P={A,B,C} | Non-valid | VP-1, VP-2, VP-14, VP-15, VP-16
VP-34 | Check whether valid products (with a minimum set of features) are correctly identified in feature models containing optional and or-relationships. | [FM diagram: A, B, C, D, E, F, G]; P={A,C} | Valid | VP-3, VP-4, VP-5, VP-6, VP-7
VP-35 | Check whether valid products (with a maximum set of features) are correctly identified in feature models containing optional and or-relationships. | [FM diagram: A, B, C, D, E, F, G]; P={A,B,C,D,E,F,G} | Valid | VP-3, VP-4, VP-5, VP-6, VP-7
VP-36 | Check whether non-valid products (with a minimum set of features) are correctly identified in feature models containing optional and or-relationships. | [FM diagram: A, B, C, D, E, F, G]; P={A} | Non-valid | VP-3, VP-4, VP-5, VP-6, VP-7
VP-37 | Check whether non-valid products (with a maximum set of features) are correctly identified in feature models containing optional and or-relationships. | [FM diagram: A, B, C, D, E, F, G]; P={A,B,C,D,G} | Non-valid | VP-3, VP-4, VP-5, VP-6, VP-7
VP-38 | Check whether valid products (with a minimum set of features) are correctly identified in feature models containing optional and alternative relationships. | [FM diagram: A, B, C, D, E, F, G]; P={A,C} | Valid | VP-3, VP-4, VP-8, VP-9, VP-10
VP-39 | Check whether valid products (with a maximum set of features) are correctly identified in feature models containing optional and alternative relationships. | [FM diagram: A, B, C, D, E, F, G]; P={A,B,D,F,G} | Valid | VP-3, VP-4, VP-8, VP-9, VP-10
VP-40 | Check whether non-valid products (with a minimum set of features) are correctly identified in feature models containing optional and alternative relationships. | [FM diagram: A, B, C, D, E, F, G]; P={A} | Non-valid | VP-3, VP-4, VP-8, VP-9, VP-10
VP-41 | Check whether non-valid products (with a maximum set of features) are correctly identified in feature models containing optional and alternative relationships. | [FM diagram: A, B, C, D, E, F, G]; P={A,B,C,D,E,F,G} | Non-valid | VP-3, VP-4, VP-8, VP-9, VP-10
VP-42 | Check whether valid products (with a minimum set of features) are correctly identified in feature models containing or- and alternative relationships. | [FM diagram: A, B, C, D, E, F, G, H, I]; P={A,C,D} | Valid | VP-5, VP-6, VP-7, VP-8, VP-9, VP-10
VP-43 | Check whether valid products (with a maximum set of features) are correctly identified in feature models containing or- and alternative relationships. | [FM diagram: A, B, C, D, E, F, G, H, I]; P={A,B,D,E,F,G,H} | Valid | VP-5, VP-6, VP-7, VP-8, VP-9, VP-10
VP-44 | Check whether non-valid products (with a minimum set of features) are correctly identified in feature models containing or- and alternative relationships. | [FM diagram: A, B, C, D, E, F, G, H, I]; P={A} | Non-valid | VP-5, VP-6, VP-7, VP-8, VP-9, VP-10
VP-45 | Check whether non-valid products (with a maximum set of features) are correctly identified in feature models containing or- and alternative relationships. | [FM diagram: A, B, C, D, E, F, G, H, I]; P={A,B,C,D,E,F,G,H,I} | Non-valid | VP-5, VP-6, VP-7, VP-8, VP-9, VP-10
VP-46 | Check whether valid products (with a minimum set of features) are correctly identified in feature models containing or-relationships and 'requires' constraints. | [FM diagram: A, B, C]; P={A,C} | Valid | VP-5, VP-6, VP-7, VP-11, VP-12, VP-13
VP-47 | Check whether valid products (with a maximum set of features) are correctly identified in feature models containing or-relationships and 'requires' constraints. | [FM diagram: A, B, C]; P={A,B,C} | Valid | VP-5, VP-6, VP-7, VP-11, VP-12, VP-13
VP-48 | Check whether non-valid products (with a minimum set of features) are correctly identified in feature models containing or-relationships and 'requires' constraints. | [FM diagram: A, B, C]; P={A} | Non-valid | VP-5, VP-6, VP-7, VP-11, VP-12, VP-13
VP-49 | Check whether non-valid products (with a maximum set of features) are correctly identified in feature models containing or-relationships and 'requires' constraints. | [FM diagram: A, B, C]; P={A,B} | Non-valid | VP-5, VP-6, VP-7, VP-11, VP-12, VP-13
VP-50 | Check whether valid products are correctly identified in feature models containing or-relationships and 'excludes' constraints. | [FM diagram: A, B, C]; P={A,B} | Valid | VP-5, VP-6, VP-7, VP-14, VP-15, VP-16
VP-51 | Check whether non-valid products (with a minimum set of features) are correctly identified in feature models containing or-relationships and 'excludes' constraints. | [FM diagram: A, B, C]; P={A} | Non-valid | VP-5, VP-6, VP-7, VP-14, VP-15, VP-16
VP-52 | Check whether non-valid products (with a maximum set of features) are correctly identified in feature models containing or-relationships and 'excludes' constraints. | [FM diagram: A, B, C]; P={A,B,C} | Non-valid | VP-5, VP-6, VP-7, VP-14, VP-15, VP-16
VP-53 | Check whether valid products are correctly identified in feature models containing alternative relationships and 'requires' constraints. | [FM diagram: A, B, C]; P={A,C} | Valid | VP-8, VP-9, VP-10, VP-11, VP-12, VP-13
VP-54 | Check whether non-valid products (with a minimum set of features) are correctly identified in feature models containing alternative relationships and 'requires' constraints. | [FM diagram: A, B, C]; P={A} | Non-valid | VP-8, VP-9, VP-10, VP-11, VP-12, VP-13
VP-55 | Check whether non-valid products (with a maximum set of features) are correctly identified in feature models containing alternative relationships and 'requires' constraints. | [FM diagram: A, B, C]; P={A,B,C} | Non-valid | VP-8, VP-9, VP-10, VP-11, VP-12, VP-13
VP-56 | Check whether valid products are correctly identified in feature models containing alternative relationships and 'excludes' constraints. | [FM diagram: A, B, C]; P={A,B} | Valid | VP-8, VP-9, VP-10, VP-14, VP-15, VP-16
VP-57 | Check whether non-valid products (with a minimum set of features) are correctly identified in feature models containing alternative relationships and 'excludes' constraints. | [FM diagram: A, B, C]; P={A} | Non-valid | VP-8, VP-9, VP-10, VP-14, VP-15, VP-16
VP-58 | Check whether non-valid products (with a maximum set of features) are correctly identified in feature models containing alternative relationships and 'excludes' constraints. | [FM diagram: A, B, C]; P={A,B,C} | Non-valid | VP-8, VP-9, VP-10, VP-14, VP-15, VP-16
VP-59 | Check whether valid products (with a minimum set of features) are correctly identified in feature models containing 'requires' and 'excludes' constraints. | [FM diagram: A, B, C]; P={A} | Valid | VP-3, VP-4, VP-11, VP-12, VP-13, VP-14, VP-15, VP-16
VP-60 | Check whether valid products (with a maximum set of features) are correctly identified in feature models containing 'requires' and 'excludes' constraints. | [FM diagram: A, B, C]; P={A,C} | Valid | VP-3, VP-4, VP-11, VP-12, VP-13, VP-14, VP-15, VP-16
VP-61 | Check whether non-valid products (with a minimum set of features) are correctly identified in feature models containing 'requires' and 'excludes' constraints. | [FM diagram: A, B, C]; P={A,B} | Non-valid | VP-3, VP-4, VP-11, VP-12, VP-13, VP-14, VP-15, VP-16
VP-62 | Check whether non-valid products (with a maximum set of features) are correctly identified in feature models containing 'requires' and 'excludes' constraints. | [FM diagram: A, B, C]; P={A,B,C} | Non-valid | VP-3, VP-4, VP-11, VP-12, VP-13, VP-14, VP-15, VP-16
VP-63 | Check whether valid products (with a minimum set of features) are correctly identified in feature models containing three or more different types of relationships and constraints. | [FM diagram: A, B, C, D, E, F, G]; P={A,B,D} | Valid | VP-1, ..., VP-59
VP-64 | Check whether valid products (with a maximum set of features) are correctly identified in feature models containing three or more different types of relationships and constraints. | [FM diagram: A, B, C, D, E, F, G]; P={A,B,C,E,F,G} | Valid | VP-1, ..., VP-59
VP-65 | Check whether non-valid products (with a minimum set of features) are correctly identified in feature models containing three or more different types of relationships and constraints. | [FM diagram: A, B, C, D, E, F, G]; P={A} | Non-valid | VP-1, ..., VP-59
VP-66 | Check whether non-valid products (with a maximum set of features) are correctly identified in feature models containing three or more different types of relationships and constraints. | [FM diagram: A, B, C, D, E, F, G]; P={A,B,C,D,E,F,G} | Non-valid | VP-1, ..., VP-59
VP-67 | Check whether non-valid products excluding the root feature are correctly identified. | [FM diagram: A, B]; P={B} | Non-valid | VP-3, VP-4
VP-68 | Check whether non-valid products including non-existent features are correctly processed by the operation. | [FM diagram: A, B]; P={A,H} | Non-valid | VP-3, VP-4
VP-69 | Check whether child features in an alternative relationship can erroneously be part of a product without their parent feature. | [FM diagram: A, B, C, D]; P={A,C,D} | Non-valid | VP-3, VP-4, VP-8, VP-9, VP-10

Table A.2: Operation ValidProduct. Test cases
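
As with VoidFM, a hand-written oracle makes these expectations executable. The sketch below is illustrative Python, not FaMa code; it assumes the alternative-relationship model shared by VP-8, VP-9 and VP-10 (root A with alternative children B and C) and also rejects unknown features, in the spirit of VP-68:

    # Minimal sketch, not FaMa code. Model assumed from VP-8/VP-9/VP-10:
    # root A with an alternative relationship over children B and C.
    def valid_product(product):
        p = set(product)
        if "A" not in p:                      # the root belongs to every product
            return False
        if p - {"A", "B", "C"}:               # unknown features (cf. VP-68)
            return False
        return len(p & {"B", "C"}) == 1       # alternative: exactly one child

    assert valid_product({"A", "B"})              # VP-8:  Valid
    assert not valid_product({"A"})               # VP-9:  Non-valid
    assert not valid_product({"A", "B", "C"})     # VP-10: Non-valid
    assert not valid_product({"A", "B", "H"})     # cf. VP-68: unknown feature H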


A.3 Operation All Products

ID | Description | Input | Exp. Output | Deps.
P-1 | Check whether mandatory relationships are correctly managed by the operation. | [FM diagram: A, B] | {A,B} | none
P-2 | Check whether optional relationships are correctly managed by the operation. | [FM diagram: A, B] | {A}, {A,B} | none
P-3 | Check whether or-relationships are correctly managed by the operation. | [FM diagram: A, B, C] | {A,B}, {A,C}, {A,B,C} | none
P-4 | Check whether alternative relationships are correctly managed by the operation. | [FM diagram: A, B, C] | {A,B}, {A,C} | none
P-5 | Check whether 'requires' constraints are correctly managed by the operation. | [FM diagram: A, B, C] | {A}, {A,C}, {A,B,C} | P-2
P-6 | Check whether 'excludes' constraints are correctly managed by the operation. | [FM diagram: A, B, C] | {A}, {A,B}, {A,C} | P-2
P-7 | Check whether the interaction between mandatory and optional relationships is correctly processed. | [FM diagram: A, B, C, D, E] | {A,B}, {A,B,D}, {A,B,C,E}, {A,B,C,D,E} | P-1, P-2
P-8 | Check whether the interaction between mandatory and or-relationships is correctly processed. | [FM diagram: A, B, C, D, E, F, G] | {A,B,C,E}, {A,B,C,F}, {A,B,C,E,F}, {A,B,D,E,G}, {A,B,C,D,E,G}, {A,B,D,F,G}, {A,B,C,D,F,G}, {A,B,D,E,F,G}, {A,B,C,D,E,F,G} | P-1, P-3
P-9 | Check whether the interaction between mandatory and alternative relationships is correctly processed. | [FM diagram: A, B, C, D, E, F, G] | {A,B,D,F}, {A,B,D,E}, {A,B,C,F,G}, {A,B,C,E,G} | P-1, P-4
P-10 | Check whether the interaction between mandatory relationships and 'requires' constraints is correctly processed. | [FM diagram: A, B, C] | {A,B,C} | P-1, P-5
P-11 | Check whether the interaction between mandatory relationships and 'excludes' constraints is correctly processed. | [FM diagram: A, B, C] | None | P-1, P-6
P-12 | Check whether the interaction between optional and or-relationships is correctly processed. | [FM diagram: A, B, C, D, E, F, G] | {A,D}, {A,C}, {A,C,D}, {A,C,G}, {A,B,D,F}, {A,B,D,E}, {A,C,D,G}, {A,B,C,F}, {A,B,C,E}, {A,B,D,E,F}, {A,B,C,F,G}, {A,B,C,E,G}, {A,B,C,D,F}, {A,B,C,D,E}, {A,B,C,E,F}, {A,B,C,D,F,G}, {A,B,C,D,E,G}, {A,B,C,E,F,G}, {A,B,C,D,E,F}, {A,B,C,D,E,F,G} | P-2, P-3
P-13 | Check whether the interaction between optional and alternative relationships is correctly processed. | [FM diagram: A, B, C, D, E, F, G] | {A,C}, {A,D}, {A,D,G}, {A,B,C,E}, {A,B,C,F}, {A,B,D,E}, {A,B,D,F}, {A,B,D,E,G}, {A,B,D,F,G} | P-2, P-4
P-14 | Check whether the interaction between or- and alternative relationships is correctly processed. | [FM diagram: A, B, C, D, E, F, G, H, I] | {A,C,D}, {A,C,E,H}, {A,C,E,I}, {A,B,D,G}, {A,B,D,F}, {A,C,D,E,H}, {A,C,D,E,I}, {A,B,D,F,G}, {A,B,E,G,H}, {A,B,E,F,H}, {A,B,E,G,I}, {A,B,E,F,I}, {A,B,D,E,G,H}, {A,B,E,F,G,H}, {A,B,D,E,F,H}, {A,B,D,E,G,I}, {A,B,E,F,G,I}, {A,B,D,E,F,I}, {A,B,D,E,F,G,H}, {A,B,D,E,F,G,I} | P-3, P-4
P-15 | Check whether the interaction between or-relationships and 'requires' constraints is correctly processed. | [FM diagram: A, B, C] | {A,C}, {A,B,C} | P-3, P-5
P-16 | Check whether the interaction between or-relationships and 'excludes' constraints is correctly processed. | [FM diagram: A, B, C] | {A,B}, {A,C} | P-3, P-6
P-17 | Check whether the interaction between alternative relationships and 'requires' constraints is correctly processed. | [FM diagram: A, B, C] | {A,C} | P-4, P-5
P-18 | Check whether the interaction between alternative relationships and 'excludes' constraints is correctly processed. | [FM diagram: A, B, C] | {A,B}, {A,C} | P-4, P-6
P-19 | Check whether the interaction between 'requires' and 'excludes' constraints is correctly processed. | [FM diagram: A, B, C] | {A}, {A,C} | P-2, P-5, P-6
P-20 | Check whether the interaction among three or more different types of relationships and constraints is correctly processed. | [FM diagram: A, B, C, D, E, F, G] | {A,B,D}, {A,B,C,D,F}, {A,B,C,E,F}, {A,B,C,E,F,G} | P-1, ..., P-19

Table A.3: Operation AllProducts. Test cases
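
The AllProducts operation can likewise be checked against a brute-force enumeration. The sketch below is illustrative Python under an assumed tree that is consistent with the expected output of P-7: root A with mandatory child B and optional child C, D an optional child of B, and E a mandatory child of C:

    # Minimal sketch, not FaMa code. Tree assumed from the products of P-7.
    from itertools import chain, combinations

    FEATURES = ["A", "B", "C", "D", "E"]

    def is_product(selection):
        p = set(selection)
        if "A" not in p or "B" not in p:   # root A; B is mandatory under A
            return False
        # D is an optional child of B; B is always present, so D is free.
        return ("C" in p) == ("E" in p)    # E is mandatory under optional C

    def all_products():
        subsets = chain.from_iterable(
            combinations(FEATURES, r) for r in range(len(FEATURES) + 1))
        return sorted(sorted(set(s)) for s in subsets if is_product(s))

    # The four products expected by P-7, in lexicographic order.
    assert all_products() == [list("AB"), list("ABCDE"), list("ABCE"), list("ABD")]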

A.4 Operation Number of Products

ID | Description | Input | Exp. Output | Deps.
NP-1 | Check whether mandatory relationships are correctly managed by the operation. | [FM diagram: A, B] | 1 | none
NP-2 | Check whether optional relationships are correctly managed by the operation. | [FM diagram: A, B] | 2 | none
NP-3 | Check whether or-relationships are correctly managed by the operation. | [FM diagram: A, B, C] | 3 | none
NP-4 | Check whether alternative relationships are correctly managed by the operation. | [FM diagram: A, B, C] | 2 | none
NP-5 | Check whether 'requires' constraints are correctly managed by the operation. | [FM diagram: A, B, C] | 3 | NP-2
NP-6 | Check whether 'excludes' constraints are correctly managed by the operation. | [FM diagram: A, B, C] | 3 | NP-2
NP-7 | Check whether the interaction between mandatory and optional relationships is correctly processed. | [FM diagram: A, B, C, D, E] | 4 | NP-1, NP-2
NP-8 | Check whether the interaction between mandatory and or-relationships is correctly processed. | [FM diagram: A, B, C, D, E, F, G] | 9 | NP-1, NP-3
NP-9 | Check whether the interaction between mandatory and alternative relationships is correctly processed. | [FM diagram: A, B, C, D, E, F, G] | 4 | NP-1, NP-4
NP-10 | Check whether the interaction between mandatory relationships and 'requires' constraints is correctly processed. | [FM diagram: A, B, C] | 1 | NP-1, NP-5
NP-11 | Check whether the interaction between mandatory relationships and 'excludes' constraints is correctly processed. | [FM diagram: A, B, C] | 0 | NP-1, NP-6
NP-12 | Check whether the interaction between optional and or-relationships is correctly processed. | [FM diagram: A, B, C, D, E, F, G] | 20 | NP-2, NP-3
NP-13 | Check whether the interaction between optional and alternative relationships is correctly processed. | [FM diagram: A, B, C, D, E, F, G] | 9 | NP-2, NP-4
NP-14 | Check whether the interaction between or- and alternative relationships is correctly processed. | [FM diagram: A, B, C, D, E, F, G, H, I] | 20 | NP-3, NP-4
NP-15 | Check whether the interaction between or-relationships and 'requires' constraints is correctly processed. | [FM diagram: A, B, C] | 2 | NP-3, NP-5
NP-16 | Check whether the interaction between or-relationships and 'excludes' constraints is correctly processed. | [FM diagram: A, B, C] | 2 | NP-3, NP-6
NP-17 | Check whether the interaction between alternative relationships and 'requires' constraints is correctly processed. | [FM diagram: A, B, C] | 1 | NP-4, NP-5
NP-18 | Check whether the interaction between alternative relationships and 'excludes' constraints is correctly processed. | [FM diagram: A, B, C] | 2 | NP-4, NP-6
NP-19 | Check whether the interaction between 'requires' and 'excludes' constraints is correctly processed. | [FM diagram: A, B, C] | 2 | NP-2, NP-5, NP-6
NP-20 | Check whether the interaction among three or more different types of relationships and constraints is correctly processed. | [FM diagram: A, B, C, D, E, F, G] | 4 | NP-1, ..., NP-19

Table A.4: Operation Number of Products. Test cases
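
For NumberOfProducts, the expected figures for the single-relationship models (NP-1 to NP-4) can be reproduced by counting the feature subsets that satisfy each relationship. The following illustrative Python sketch (not FaMa code) encodes each relationship as a predicate over a candidate product, following the usual FODA semantics:

    # Minimal sketch, not FaMa code: brute-force counting of products.
    from itertools import chain, combinations

    def count_products(features, is_product):
        subsets = chain.from_iterable(
            combinations(features, r) for r in range(len(features) + 1))
        return sum(1 for s in subsets if is_product(set(s)))

    # One predicate per relationship, always under root feature A.
    mandatory   = lambda p: "A" in p and "B" in p                   # NP-1
    optional    = lambda p: "A" in p                                # NP-2
    or_rel      = lambda p: "A" in p and len(p & {"B", "C"}) >= 1   # NP-3
    alternative = lambda p: "A" in p and len(p & {"B", "C"}) == 1   # NP-4

    assert count_products(["A", "B"], mandatory) == 1         # NP-1
    assert count_products(["A", "B"], optional) == 2          # NP-2
    assert count_products(["A", "B", "C"], or_rel) == 3       # NP-3
    assert count_products(["A", "B", "C"], alternative) == 2  # NP-4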

A.5 Operation Variability

ID | Description | Input | Exp. Output | Deps.
V-1 | Check whether mandatory relationships are correctly managed by the operation. | [FM diagram: A, B] | 0.33 | none
V-2 | Check whether optional relationships are correctly managed by the operation. | [FM diagram: A, B] | 0.66 | none
V-3 | Check whether or-relationships are correctly managed by the operation. | [FM diagram: A, B, C] | 0.42 | none
V-4 | Check whether alternative relationships are correctly managed by the operation. | [FM diagram: A, B, C] | 0.28 | none
V-5 | Check whether 'requires' constraints are correctly managed by the operation. | [FM diagram: A, B, C] | 0.42 | V-2
V-6 | Check whether 'excludes' constraints are correctly managed by the operation. | [FM diagram: A, B, C] | 0.42 | V-2
V-7 | Check whether the interaction between mandatory and optional relationships is correctly processed. | [FM diagram: A, B, C, D, E] | 0.12 | V-1, V-2
V-8 | Check whether the interaction between mandatory and or-relationships is correctly processed. | [FM diagram: A, B, C, D, E, F, G] | 0.07 | V-1, V-3
V-9 | Check whether the interaction between mandatory and alternative relationships is correctly processed. | [FM diagram: A, B, C, D, E, F, G] | 0.03 | V-1, V-4
V-10 | Check whether the interaction between mandatory relationships and 'requires' constraints is correctly processed. | [FM diagram: A, B, C] | 0.14 | V-1, V-5
V-11 | Check whether the interaction between mandatory relationships and 'excludes' constraints is correctly processed. | [FM diagram: A, B, C] | 0 | V-1, V-6
V-12 | Check whether the interaction between optional and or-relationships is correctly processed. | [FM diagram: A, B, C, D, E, F, G] | 0.15 | V-2, V-3
V-13 | Check whether the interaction between optional and alternative relationships is correctly processed. | [FM diagram: A, B, C, D, E, F, G] | 0.07 | V-2, V-4
V-14 | Check whether the interaction between or- and alternative relationships is correctly processed. | [FM diagram: A, B, C, D, E, F, G, H, I] | 0.03 | V-3, V-4
V-15 | Check whether the interaction between or-relationships and 'requires' constraints is correctly processed. | [FM diagram: A, B, C] | 0.28 | V-3, V-5
V-16 | Check whether the interaction between or-relationships and 'excludes' constraints is correctly processed. | [FM diagram: A, B, C] | 0.28 | V-3, V-6
V-17 | Check whether the interaction between alternative relationships and 'requires' constraints is correctly processed. | [FM diagram: A, B, C] | 0.14 | V-4, V-5
V-18 | Check whether the interaction between alternative relationships and 'excludes' constraints is correctly processed. | [FM diagram: A, B, C] | 0.28 | V-4, V-6
V-19 | Check whether the interaction between 'requires' and 'excludes' constraints is correctly processed. | [FM diagram: A, B, C] | 0.28 | V-2, V-5, V-6
V-20 | Check whether the interaction among three or more different types of relationships and constraints is correctly processed. | [FM diagram: A, B, C, D, E, F, G] | 0.03 | V-1, ..., V-19

Table A.5: Operation Variability. Test cases
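
The expected outputs above are consistent with defining variability as the number of products divided by 2^n - 1, where n is the number of features (i.e. the number of non-empty feature combinations), truncated to two decimal places; for instance, V-1 gives 1/(2^2 - 1) = 0.33. A minimal sketch, assuming exactly that definition:

    # Minimal sketch, assuming variability = #products / (2^n - 1),
    # truncated to two decimals; this matches the expected outputs above.
    import math

    def variability(n_products, n_features):
        ratio = n_products / (2 ** n_features - 1)
        return math.floor(100 * ratio) / 100

    assert variability(1, 2) == 0.33   # V-1: mandatory (1 product, 2 features)
    assert variability(2, 2) == 0.66   # V-2: optional
    assert variability(3, 3) == 0.42   # V-3: or-relationship
    assert variability(2, 3) == 0.28   # V-4: alternative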

A.6 Operation Commonality

ID | Description | Input | Exp. Output | Deps.
C-1 | Check whether mandatory relationships are correctly managed by the operation. | [FM diagram: A, B]; Feature=B | 100% | none
C-2 | Check whether optional relationships are correctly managed by the operation. | [FM diagram: A, B]; Feature=B | 50% | none
C-3 | Check whether or-relationships are correctly managed by the operation. | [FM diagram: A, B, C]; Feature=B | 66% | none
C-4 | Check whether alternative relationships are correctly managed by the operation. | [FM diagram: A, B, C]; Feature=B | 50% | none
C-5 | Check whether 'requires' constraints are correctly managed by the operation. The input feature has minimum commonality. | [FM diagram: A, B, C]; Feature=B | 33% | C-2
C-6 | Check whether 'requires' constraints are correctly managed by the operation. The input feature has maximum commonality. | [FM diagram: A, B, C]; Feature=C | 66% | C-2
C-7 | Check whether 'excludes' constraints are correctly managed by the operation. | [FM diagram: A, B, C]; Feature=B | 33% | C-2
C-8 | Check whether the interaction between mandatory and optional relationships is correctly processed. The input feature has minimum commonality. | [FM diagram: A, B, C, D, E]; Feature=E | 50% | C-1, C-2
C-9 | Check whether the interaction between mandatory and optional relationships is correctly processed. The input feature has maximum commonality. | [FM diagram: A, B, C, D, E]; Feature=B | 100% | C-1, C-2
C-10 | Check whether the interaction between mandatory and or-relationships is correctly processed. The input feature has minimum commonality. | [FM diagram: A, B, C, D, E, F, G]; Feature=F | 66% | C-1, C-3
C-11 | Check whether the interaction between mandatory and or-relationships is correctly processed. The input feature has maximum commonality. | [FM diagram: A, B, C, D, E, F, G]; Feature=B | 100% | C-1, C-3
C-12 | Check whether the interaction between mandatory and alternative relationships is correctly processed. The input feature has minimum commonality. | [FM diagram: A, B, C, D, E, F, G]; Feature=G | 50% | C-1, C-4
C-13 | Check whether the interaction between mandatory and alternative relationships is correctly processed. The input feature has maximum commonality. | [FM diagram: A, B, C, D, E, F, G]; Feature=B | 100% | C-1, C-4
C-14 | Check whether the interaction between mandatory relationships and 'requires' constraints is correctly processed. | [FM diagram: A, B, C]; Feature=B | 100% | C-1, C-5, C-6
C-15 | Check whether the interaction between mandatory relationships and 'excludes' constraints is correctly processed. | [FM diagram: A, B, C]; Feature=B | 0% | C-1, C-7
C-16 | Check whether the interaction between optional and or-relationships is correctly processed. The input feature has minimum commonality. | [FM diagram: A, B, C, D, E, F, G]; Feature=G | 40% | C-2, C-3
C-17 | Check whether the interaction between optional and or-relationships is correctly processed. The input feature has maximum commonality. | [FM diagram: A, B, C, D, E, F, G]; Feature=C | 80% | C-2, C-3
C-18 | Check whether the interaction between optional and alternative relationships is correctly processed. The input feature has minimum commonality. | [FM diagram: A, B, C, D, E, F, G]; Feature=E | 33% | C-2, C-4
C-19 | Check whether the interaction between optional and alternative relationships is correctly processed. The input feature has maximum commonality. | [FM diagram: A, B, C, D, E, F, G]; Feature=D | 66% | C-2, C-4
C-20 | Check whether the interaction between or- and alternative relationships is correctly processed. The input feature has minimum commonality. | [FM diagram: A, B, C, D, E, F, G, H, I]; Feature=C | 25% | C-3, C-4
C-21 | Check whether the interaction between or- and alternative relationships is correctly processed. The input feature has maximum commonality. | [FM diagram: A, B, C, D, E, F, G, H, I]; Feature=E | 80% | C-3, C-4
C-22 | Check whether the interaction between or-relationships and 'requires' constraints is correctly processed. The input feature has minimum commonality. | [FM diagram: A, B, C]; Feature=B | 50% | C-3, C-5, C-6
C-23 | Check whether the interaction between or-relationships and 'requires' constraints is correctly processed. The input feature has maximum commonality. | [FM diagram: A, B, C]; Feature=C | 100% | C-3, C-5, C-6
C-24 | Check whether the interaction between or-relationships and 'excludes' constraints is correctly processed. | [FM diagram: A, B, C]; Feature=C | 50% | C-3, C-7
C-25 | Check whether the interaction between alternative relationships and 'requires' constraints is correctly processed. The input feature has minimum commonality. | [FM diagram: A, B, C]; Feature=B | 0% | C-4, C-5, C-6
C-26 | Check whether the interaction between alternative relationships and 'requires' constraints is correctly processed. The input feature has maximum commonality. | [FM diagram: A, B, C]; Feature=C | 100% | C-4, C-5, C-6
C-27 | Check whether the interaction between alternative relationships and 'excludes' constraints is correctly processed. | [FM diagram: A, B, C]; Feature=B | 50% | C-4, C-7
C-28 | Check whether the interaction between 'requires' and 'excludes' constraints is correctly processed. The input feature has minimum commonality. | [FM diagram: A, B, C]; Feature=B | 0% | C-2, C-5, C-6, C-7
C-29 | Check whether the interaction between 'requires' and 'excludes' constraints is correctly processed. The input feature has maximum commonality. | [FM diagram: A, B, C]; Feature=C | 50% | C-2, C-5, C-6, C-7
C-30 | Check whether the interaction among three or more different types of relationships and constraints is correctly processed. The input feature has minimum commonality. | [FM diagram: A, B, C, D, E, F, G]; Feature=G | 25% | C-1, ..., C-29
C-31 | Check whether the interaction among three or more different types of relationships and constraints is correctly processed. The input feature has maximum commonality. | [FM diagram: A, B, C, D, E, F, G]; Feature=B | 100% | C-1, ..., C-29
C-32 | Check whether non-existent features are correctly managed. | [FM diagram: A, B]; Feature=C | 0% | C-2

Table A.6: Operation Commonality. Test cases
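
Commonality here is the percentage of products of the model that contain the input feature, truncated to an integer percentage (in C-3, B appears in 2 of the 3 products of the or-model, i.e. 66%). A minimal sketch, with the products of the C-3 model listed by hand:

    # Minimal sketch, not FaMa code: commonality as the (truncated)
    # percentage of products that contain the feature.
    def commonality(feature, products):
        hits = sum(1 for p in products if feature in p)
        return (100 * hits) // len(products)

    # Products of the or-relationship model of C-3 (root A, or over B and C).
    OR_PRODUCTS = [{"A", "B"}, {"A", "C"}, {"A", "B", "C"}]

    assert commonality("B", OR_PRODUCTS) == 66   # C-3
    assert commonality("A", OR_PRODUCTS) == 100  # the root is always present
    assert commonality("D", OR_PRODUCTS) == 0    # cf. C-32: non-existent feature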

A.7 Operation Dead Features

ID | Description | Input | Exp. Output | Deps.
DF-1 | Check whether dead features caused by an 'excludes' constraint between a mandatory feature and an alternative child feature are correctly detected. | [FM diagram: A, B, C, D, E] | D | none
DF-2 | Check whether dead features caused by a 'requires' constraint between a mandatory feature and an alternative child feature are correctly detected. | [FM diagram: A, B, C, D, E] | E | none
DF-3 | Check whether dead features caused by an 'excludes' constraint between a mandatory feature and one of the child features of an or-relationship are correctly detected. | [FM diagram: A, B, C, D, E] | D | none
DF-4 | Check whether dead features caused by an 'excludes' constraint between a mandatory and an optional feature are correctly detected. | [FM diagram: A, B, C] | C | none
DF-5 | Check whether dead features caused by an 'excludes' constraint between two mandatory features are correctly detected. | [FM diagram: A, B, C] | A, B, C | none
DF-6 | Check whether dead features caused by a 'requires' constraint between two alternative child features are correctly detected. | [FM diagram: A, B, C] | B | none
DF-7 | Check whether dead features caused by an 'excludes' constraint between a parent and its child feature are correctly detected. | [FM diagram: A, B] | A, B | none
DF-8 | Check whether dead features caused by an 'excludes' and a 'requires' constraint between two optional features are correctly detected. | [FM diagram: A, B, C] | B | none

Table A.7: Operation DeadFeatures. Test cases
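
A feature is dead when it appears in no product of the model, so the operation can be derived from AllProducts. The sketch below is illustrative Python, not FaMa code; the model is an assumption consistent with DF-4 (root A, mandatory child B, optional child C, and a 'B excludes C' constraint), whose only product is {A,B}, leaving C dead:

    # Minimal sketch, not FaMa code: dead features = features in no product.
    def dead_features(features, products):
        alive = set().union(*products) if products else set()
        return sorted(set(features) - alive)

    # All products of the assumed DF-4 model (B mandatory, C optional,
    # B excludes C): the constraint keeps C out of every product.
    PRODUCTS = [{"A", "B"}]

    assert dead_features(["A", "B", "C"], PRODUCTS) == ["C"]   # DF-4 expects C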
