ACM-TOSEM


Reference Type: Journal Article
Record Number: 138
Year: 2001
Title: Designing data marts for data warehouses
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 10
Issue: 4
Pages: 452-483
ISSN: 1049-331X
DOI: 10.1145/384189.384190
Abstract: Data warehouses are databases devoted to analytical processing. They are used to support decision-making activities in most modern business settings, when complex data sets have to be studied and analyzed. The technology for analytical processing assumes that data are presented in the form of simple data marts, consisting of a well-identified collection of facts and data analysis dimensions (star schema). Despite the wide diffusion of data warehouse technology and concepts, we still miss methods that help and guide the designer in identifying and extracting such data marts out of an enterprise-wide information system, covering the upstream, requirement-driven stages of the design process. Many existing methods and tools support the activities related to the efficient implementation of data marts on top of specialized technology (such as ROLAP or MOLAP data servers). This paper presents a method to support the identification and design of data marts. The method is based on three basic steps. A first top-down step makes it possible to elicit and consolidate user requirements and expectations. This is accomplished by exploiting a goal-oriented process based on the Goal/Question/Metric paradigm developed at the University of Maryland. Ideal data marts are derived from user requirements. The second bottom-up step extracts candidate data marts.
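The star schema named in record 138 above is easy to sketch concretely. The following minimal Python/SQLite example (table and column names invented for illustration, not taken from the paper) shows a fact table keyed by dimension tables and an analytical query aggregating the fact along one dimension:

    # Minimal star-schema sketch: one fact table referencing dimension tables.
    # All table and column names are hypothetical.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_date    (date_id    INTEGER PRIMARY KEY, day  TEXT);
    CREATE TABLE fact_sales (
        product_id INTEGER REFERENCES dim_product(product_id),
        date_id    INTEGER REFERENCES dim_date(date_id),
        amount     REAL
    );
    """)
    con.execute("INSERT INTO dim_product VALUES (1, 'widget')")
    con.execute("INSERT INTO dim_date VALUES (10, '2001-01-01')")
    con.execute("INSERT INTO fact_sales VALUES (1, 10, 99.5)")
    # Analytical query: aggregate the facts along the product dimension.
    for row in con.execute("""
        SELECT p.name, SUM(f.amount)
        FROM fact_sales f JOIN dim_product p USING (product_id)
        GROUP BY p.name"""):
        print(row)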

Reference Type: Journal Article
Record Number: 126
Year: 2002
Title: Temporal abstract classes and virtual temporal specifications for real-time systems
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 11
Issue: 3
Pages: 291-308
ISSN: 1049-331X
DOI: 10.1145/567793.567794
Abstract: The design and development of real-time systems is often a difficult and time-consuming task. System realization has become increasingly difficult due to the proliferation of larger and more complex applications. To offset some of these difficulties, real-time developers have turned to object-oriented methodology. The success of object-oriented concepts in the development of non-real-time programs motivates the relevance of these concepts for achieving similar gains from encapsulation and code reuse in the real-time domain. This article presents an approach to integrating real-time constraint specifications within the constructs of an object-oriented language, affording these constraints a status equivalent to other language elements. This has led to the definition of such novel concepts as temporal abstract classes, virtual temporal constraints, and temporal specification inheritance, which extends inheritance mechanisms to accommodate real-time constraint specifications. These extensions provide real-time developers with the ability to manage and maintain the temporal behavior of a real-time program in a manner comparable to its functional behavior.

Reference Type: Journal Article
Record Number: 117
Year: 2003
Title: Editorial
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 12
Issue: 1
Pages: 1-2
ISSN: 1049-331X
DOI: 10.1145/839268.839269

Reference Type: Journal Article
Record Number: 90
Year: 2005
Title: Editorial
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 14
Issue: 2
Pages: 119-123
ISSN: 1049-331X
DOI: 10.1145/1061254.1061255

Reference Type: Journal Article
Record Number: 52
Year: 2007
Title: Editorial
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 17
Issue: 1
Pages: 1-2
ISSN: 1049-331X
DOI: 10.1145/1314493.1314494

Reference Type: Journal Article
Record Number: 40
Year: 2008
Title: Editorial
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 17
Issue: 3
Pages: 1-2
ISSN: 1049-331X
DOI: 10.1145/1363102.1363103

Reference Type: Journal Article
Record Number: 155
Year: 2008
Title: Editorial
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 17
Issue: 2
Pages: 1-1
ISSN: 1049-331X
DOI: 10.1145/1348250.1348251

Reference Type: Journal Article
Record Number: 46
Year: 2008
Title: Introduction to the special section from the ACM international symposium on software testing and analysis (ISSTA 2006)
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 17
Issue: 2
Pages: 1-2
ISSN: 1049-331X
DOI: 10.1145/1348250.1348252

Reference Type: Journal Article
Record Number: 24
Year: 2009
Title: Editorial
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 18
Issue: 3
Pages: 1-2
ISSN: 1049-331X
DOI: 10.1145/1525880.1525881

Reference Type: Journal Article
Record Number: 11
Year: 2010
Title: Editorial
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 19
Issue: 3
Pages: 1-2
ISSN: 1049-331X
DOI: 10.1145/1656250.1656251

Reference Type: Journal Article
Record Number: 317
Author: Abhik Roychoudhury, Ankit Goel and B. Sengupta
Year: 2012
Title: Symbolic Message Sequence Charts
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 21
Issue: 2
Pages: 1-44
ISSN: 1049-331X
DOI: 10.1145/2089116.2089122
Keywords: Design; design tools and techniques; languages; message sequence charts; requirements/specifications; symbolic execution; test generation; Unified Modeling Language; verification
Abstract: Message sequence charts (MSCs) are a widely used visual formalism for scenario-based specifications of distributed reactive systems. In its conventional usage, an MSC captures an interaction snippet between concrete objects in the system. This leads to voluminous specifications when the system contains several objects that are behaviorally similar. MSCs also play an important role in the model-based testing of reactive systems, where they may be used for specifying (partial) system behaviors, describing test generation criteria, or representing test cases. However, since the number of processes in an MSC specification is fixed, model-based testing of systems consisting of process classes may involve a significant amount of rework: for example, reconstructing system models, or regenerating test cases for systems differing only in the number of processes of various types. In this article we propose a scenario-based notation, called symbolic message sequence charts (SMSCs), for modeling, simulation, and testing of process classes. SMSCs are a lightweight syntactic and semantic extension of MSCs where, unlike MSCs, an SMSC lifeline can denote some/all objects from a collection. Our extensions give us substantially more modeling power. Moreover, we present an abstract execution semantics for (structured collections of) SMSCs. This allows us to validate MSC-based system models capturing interactions between large, or even unbounded, numbers of objects. Finally, we describe an SMSC-based testing methodology for process classes, which allows generation of test cases for new object configurations with minimal rework. Since our SMSC extensions are only concerned with MSC lifelines, we believe that they can be integrated into existing standards such as UML 2.0. We illustrate our SMSC-based framework for modeling, simulation, and testing of process classes using a weather-update controller case study from NASA.
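The core idea of record 317, a lifeline that stands for every object of a process class, can be caricatured in a few lines. This Python sketch uses an invented mini-encoding (it is not the SMSC syntax) in which a receiver tagged ALL:<class> is expanded into one concrete message per object of that class for a given configuration:

    # Hypothetical mini-encoding of a symbolic lifeline: a message whose
    # receiver ranges over all objects of a process class.
    symbolic_chart = [("controller", "ALL:station", "update")]
    config = {"station": ["station1", "station2", "station3"]}

    def expand(chart, config):
        """Instantiate each ALL:<class> lifeline with every object of the class."""
        concrete = []
        for sender, receiver, msg in chart:
            if receiver.startswith("ALL:"):
                cls = receiver[4:]
                concrete += [(sender, obj, msg) for obj in config[cls]]
            else:
                concrete.append((sender, receiver, msg))
        return concrete

    print(expand(symbolic_chart, config))

Changing the configuration regenerates the concrete charts with no rework, which is the point the abstract makes about testing new object configurations.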

Reference Type: Journal Article
Record Number: 102
Author: T. Akgul and V. J. Mooney III
Year: 2004
Title: Assembly instruction level reverse execution for debugging
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 13
Issue: 2
Pages: 149-198
ISSN: 1049-331X
DOI: 10.1145/1018210.1018211
Abstract: Assembly instruction level reverse execution provides a programmer with the ability to return a program to a previous state in its execution history via execution of a "reverse program." The ability to execute a program in reverse is advantageous for shortening software development time. Conventional techniques for recovering a state rely on saving the state into a record before the state is destroyed. However, state-saving causes significant memory and time overheads during forward execution. The proposed method introduces a reverse execution methodology at the assembly instruction level with low memory and time overheads. The methodology generates, from a program, a reverse program by which a destroyed state is almost always regenerated rather than being restored from a record. This significantly reduces state-saving. The methodology has been implemented on a PowerPC processor with a custom-made debugger. As compared to previous work, all of which heavily uses state-saving techniques, the experimental results show from 2X to 2206X reduction in runtime memory usage, from 1.5X to 403X reduction in forward execution time overhead, and from 1.2X to 2.32X reduction in forward execution time for the tested benchmarks. Furthermore, due to the reduction in memory usage, our method can provide reverse execution in many cases where other methods run out of available memory. However, for cases where there is enough memory available, our method results in a 1.16X to 1.89X slowdown in reverse execution.
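The distinction record 102 draws between regenerating state and saving it can be shown on a toy straight-line program. In this Python sketch (illustrative only, not the paper's assembly-level algorithm), invertible updates are reversed algebraically and only the destructive assignment needs state-saving:

    # Build a "reverse program" during the forward run.
    state = {"x": 0, "y": 5}
    program = [("add", "x", 3), ("add", "y", 2), ("set", "x", 42)]

    reverse = []
    for op, var, val in program:
        if op == "add":
            state[var] += val
            reverse.append(("add", var, -val))        # inverse: no saved state
        elif op == "set":
            reverse.append(("set", var, state[var]))  # must save destroyed value
            state[var] = val

    for op, var, val in reversed(reverse):            # run the reverse program
        state[var] = state[var] + val if op == "add" else val
    print(state)  # back to {'x': 0, 'y': 5}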

Reference Type: Journal Article
Record Number: 321
Author: Alessandro Fantechi, Stefania Gnesi, Alessandro Lapadula, Franco Mazzanti, Rosario Pugliese and F. Tiezzi
Year: 2012
Title: A logical verification methodology for service-oriented computing
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 21
Issue: 3
Pages: 1-46
ISSN: 1049-331X
DOI: 10.1145/2211616.2211619
Keywords: Formal methods; model checking; process semantics; service-oriented computing; syntax; temporal logic; theory; verification; Web services
Abstract: We introduce a logical verification methodology for checking behavioral properties of service-oriented computing systems. Service properties are described by means of SocL, a branching-time temporal logic that we have specifically designed for expressing in an effective way distinctive aspects of services, such as acceptance of a request, provision of a response, and correlation among service requests and responses. Our approach allows service properties to be expressed in such a way that they can be independent of service domains and specifications. We show an instantiation of our general methodology that uses the formal language COWS to conveniently specify services and the expressly developed software tool CMC to assist the user in the task of verifying SocL formulas over service specifications. We demonstrate the feasibility and effectiveness of our methodology by means of the specification and analysis of a case study in the automotive domain.

Reference Type: Journal Article
Record Number: 160
Author: Ali Ebnenasir and S. S. Kulkarni
Year: 2011
Title: Feasibility of Stepwise Design of Multitolerant Programs
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 21
Issue: 1
ISSN: 1049-331X
DOI: 10.1145/2063239.2063240
Keywords: Automatic addition of fault tolerance
Abstract: The complexity of designing programs that simultaneously tolerate multiple classes of faults, called multitolerant programs, is in part due to the conflicting nature of the fault tolerance requirements that must be met by a multitolerant program when different types of faults occur. To facilitate the design of multitolerant programs, we present sound and (deterministically) complete algorithms for stepwise design of two families of multitolerant programs in a high atomicity program model, where a process can read and write all program variables in an atomic step. We illustrate that if one needs to design failsafe (respectively, nonmasking) fault tolerance for one class of faults and masking fault tolerance for another class of faults, then a multitolerant program can be designed in separate polynomial-time (in the state space of the fault-intolerant program) steps regardless of the order of addition. This result has a significant methodological implication in that designers need not be concerned about unknown fault tolerance requirements that may arise due to unanticipated types of faults. Further, we illustrate that if one needs to design failsafe fault tolerance for one class of faults and nonmasking fault tolerance for a different class of faults, then the resulting problem is NP-complete in program state space. This is a counterintuitive result in that designing failsafe and nonmasking fault tolerance for the same class of faults can be done in polynomial time. We also present sufficient conditions for polynomial-time design of failsafe-nonmasking multitolerance. Finally, we demonstrate the stepwise design of multitolerance for a stable disk storage system, a token ring network protocol, and a repetitive agreement protocol that tolerates Byzantine and transient faults. Our automatic approach decreases the design time from days to a few hours for the token ring program, which is our largest example with 200 million reachable states and 8 processes.

Reference Type: Journal Article
Record Number: 315
Author: Anders Mattsson, Brian Fitzgerald, Björn Lundell and B. Lings
Year: 2012
Title: An Approach for Modeling Architectural Design Rules in UML and its Application to Embedded Software
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 21
Issue: 2
Pages: 1-29
ISSN: 1049-331X
DOI: 10.1145/2089116.2089120
Keywords: Computer-aided software engineering; design documentation; embedded software development; human factors; languages; model-driven development; model-driven engineering; object-oriented design methods; tools
Abstract: Current techniques for modeling software architecture do not provide sufficient support for modeling architectural design rules. This is a problem in the context of model-driven development, in which it is assumed that major design artifacts are represented as formal or semi-formal models. This article addresses this problem by presenting an approach to modeling architectural design rules in UML at the abstraction level of the meaning of the rules. The high abstraction level and the use of UML make the rules both amenable to automation and easy to understand for both architects and developers, which is crucial to deployment in an organization. To provide a proof of concept, a tool was developed that validates a system model against the architectural rules in a separate UML model. To demonstrate the feasibility of the approach, the architectural design rules of an existing live industrial-strength system were modeled according to the approach.

Reference Type: Journal Article
Record Number: 318
Author: Anna Queralt and E. Teniente
Year: 2012
Title: Verification and Validation of UML Conceptual Schemas with OCL Constraints
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 21
Issue: 2
Pages: 1-41
ISSN: 1049-331X
DOI: 10.1145/2089116.2089123
Keywords: Conceptual modeling; constraints; design; OCL; requirements/specifications; UML; verification
Abstract: To ensure the quality of an information system, it is essential that the conceptual schema that represents the knowledge about its domain is semantically correct. The semantic correctness of a conceptual schema can be seen from two different perspectives. On the one hand, from the point of view of its definition, a conceptual schema must be right. This is ensured by means of verification techniques that check whether the schema satisfies several correctness properties. On the other hand, from the point of view of the requirements that the information system should satisfy, a schema must also be the right one. This is ensured by means of validation techniques, which help the designer understand the exact meaning of a schema and see whether it corresponds to the requirements. In this article we propose an approach to verify and validate UML conceptual schemas with arbitrary constraints formalized in OCL. We have also implemented our approach to show its feasibility.

Reference Type: Journal Article
Record Number: 120
Author: A. Lopes, M. Wermelinger and J. L. Fiadeiro
Year: 2003
Title: Higher-order architectural connectors
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 12
Issue: 1
Pages: 64-104
ISSN: 1049-331X
DOI: 10.1145/839268.839272
Abstract: We develop a notion of higher-order connector towards supporting the systematic construction of architectural connectors for software design. A higher-order connector takes connectors as parameters and allows for services such as security protocols and fault-tolerance mechanisms to be superposed over the interactions that are handled by the connectors passed as actual arguments. The notion is first illustrated over CommUnity, a parallel program design language that we have been using for formalizing aspects of architectural design. A formal, algebraic semantics is then presented which is independent of any architectural description language. Finally, we discuss how our results can impact software design methods and tools.

Reference Type: Journal Article
Record Number: 162
Author: M. Arnold, M. Vechev and E. Yahav
Year: 2011
Title: QVM: An Efficient Runtime for Detecting Defects in Deployed Systems
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 21
Issue: 1
ISSN: 1049-331X
DOI: 10.1145/2063239.2063241
Keywords: Debugging; diagnosis; heap assertions; reliability; run-time environments; testing and debugging; typestate; virtual machines
Abstract: Coping with software defects that occur in the post-deployment stage is a challenging problem: bugs may occur only when the system uses a specific configuration and only under certain usage scenarios. Nevertheless, halting production systems until the bug is tracked and fixed is often impossible. Thus, developers have to try to reproduce the bug in laboratory conditions. Often the reproduction of the bug constitutes the lion's share of the debugging effort. In this paper we suggest an approach to address the aforementioned problem by using a specialized runtime environment (QVM, for Quality Virtual Machine). QVM efficiently detects defects by continuously monitoring the execution of the application in a production setting. QVM enables the efficient checking of violations of user-specified correctness properties, e.g., typestate safety properties, Java assertions, and heap properties pertaining to ownership. QVM is markedly different from existing techniques for continuous monitoring in its use of a novel overhead manager which enforces a user-specified overhead budget for quality checks. Existing tools for error detection in the field usually disrupt the operation of the deployed system. QVM, on the other hand, provides a balanced trade-off between the cost of the monitoring process and the maintenance of sufficient accuracy for detecting defects. Specifically, the overhead cost of using QVM instead of a standard JVM is low enough to be acceptable in production environments. We implemented QVM on top of IBM's J9 Java Virtual Machine and used it to detect and fix various errors in real-world applications.
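The overhead-budget idea in record 162 can be sketched independently of the JVM. This Python toy (the accounting scheme is invented, not QVM's actual algorithm) runs a property check only while measured checking time stays within a user-specified fraction of total runtime:

    import time

    class OverheadManager:
        def __init__(self, budget=0.10):            # allow 10% overhead
            self.budget = budget
            self.check_time = 0.0
            self.start = time.perf_counter()

        def maybe_check(self, check, *args):
            elapsed = time.perf_counter() - self.start
            if self.check_time <= self.budget * elapsed:
                t0 = time.perf_counter()
                check(*args)                         # run the quality check
                self.check_time += time.perf_counter() - t0

    def heap_assertion(objs):
        assert all(o is not None for o in objs), "ownership violated"

    om = OverheadManager()
    for i in range(1000):
        om.maybe_check(heap_assertion, [i])          # checked only within budget

The design point, as in the abstract, is that checks are skipped rather than the application slowed once the budget is exhausted.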

Reference Type: Journal Article
Record Number: 58
Author: L. Baresi and S. Morasca
Year: 2007
Title: Three empirical studies on estimating the design effort of Web applications
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 16
Issue: 4
Pages: 15
ISSN: 1049-331X
DOI: 10.1145/1276933.1276936
Abstract: Our research focuses on the effort needed for designing modern Web applications. The design effort is an important part of the total development effort, since the implementation can be partially automated by tools. We carried out three empirical studies with students of advanced university classes enrolled in engineering and communication sciences curricula. The empirical studies are based on the use of W2000, a special-purpose design notation for the design of Web applications, but the hypotheses and results may apply to a wider class of modeling notations (e.g., OOHDM, WebML, or UWE). We started by investigating the relative importance of each design activity. We then assessed the accuracy of a priori design effort predictions and the influence of a few process-related factors on the effort needed for each design activity. We also analyzed the impact of attributes like the size and complexity of W2000 design artifacts on the total effort needed to design the user experience of Web applications. In addition, we carried out a finer-grain analysis, by studying which of these attributes impact the effort devoted to the steps of the design phase that are followed when using W2000.

Reference Type: Journal Article
Record Number: 96
Author: L. Baresi and M. Pezzè
Year: 2005
Title: Formal interpreters for diagram notations
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 14
Issue: 1
Pages: 42-84
ISSN: 1049-331X
DOI: 10.1145/1044834.1044836
Abstract: The article proposes an approach for defining extensible and flexible formal interpreters for diagram notations with significant dynamic semantics. More precisely, it addresses semi-formal diagram notations that have precisely defined syntax but informally defined (dynamic) semantics. These notations are often flexible to fit the different needs and expectations of users. Flexibility comes from the incompleteness or informality of the original definition and results in different interpretations. The approach defines interpreters by means of a mapping onto a semantic domain. Two sets of rules define the correspondences between the elements of the diagram notation and those of the semantic domain, and between events and states of the semantic domain and visual annotations on the elements of the diagram notation. Flexibility also leads to notation families, that is, sets of notations that share core concepts but present slightly different interpretations. Existing approaches usually interpret these notations in isolation; the approach presented in this article allows the interpretation of a family as a whole. The feasibility of the approach is demonstrated through a prototype generator that allows users to implement special-purpose interpreters by defining relatively small sets of rules.

Reference Type: Journal Article
Record Number: 165
Author: Barthélémy Dagenais and M. P. Robillard
Year: 2011
Title: Recommending Adaptive Changes for Framework Evolution
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 20
Issue: 4
ISSN: 1049-331X
DOI: 10.1145/2000799.2000805
Keywords: Adaptive changes; distribution; maintenance; enhancement
Abstract: In the course of a framework's evolution, changes ranging from a simple refactoring to a complete rearchitecture can break client programs. Finding suitable replacements for framework elements that were accessed by a client program and deleted as part of the framework's evolution can be a challenging task. We present a recommendation system, SemDiff, that suggests adaptations to client programs by analyzing how a framework was adapted to its own changes. In a study of the evolution of one open source framework and three client programs, our approach recommended relevant adaptive changes with a high level of precision. In a second study of the evolution of two frameworks, we found that related change detection approaches were better at discovering systematic changes, and that SemDiff was complementary to these approaches by detecting non-trivial changes, such as when a functionality is imported from an external library.

Reference Type: Journal Article
Record Number: 81
Author: D. Basin, J. Doser and T. Lodderstedt
Year: 2006
Title: Model driven security: From UML models to access control infrastructures
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 15
Issue: 1
Pages: 39-91
ISSN: 1049-331X
DOI: 10.1145/1125808.1125810
Abstract: We present a new approach to building secure systems. In our approach, which we call Model Driven Security, designers specify system models along with their security requirements and use tools to automatically generate system architectures from the models, including complete, configured access control infrastructures. Rather than fixing one particular modeling language for this process, we propose a general schema for constructing such languages that combines languages for modeling systems with languages for modeling security. We present several instances of this schema that combine (both syntactically and semantically) different UML modeling languages with a security modeling language for formalizing access control requirements. From models in the combined languages, we automatically generate access control infrastructures for server-based applications, built from declarative and programmatic access control mechanisms. The modeling languages and generation process are semantically well-founded and are based on an extension of Role-Based Access Control. We have implemented this approach in a UML-based CASE tool and report on experiments.

Reference Type: Journal Article
Record Number: 61
Author: S. Basu and S. A. Smolka
Year: 2007
Title: Model checking the Java metalocking algorithm
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 16
Issue: 3
Pages: 12
ISSN: 1049-331X
DOI: 10.1145/1243987.1243990
Abstract: We report on our efforts to use the XMC model checker to model and verify the Java metalocking algorithm. XMC [Ramakrishna et al. 1997] is a versatile and efficient model checker for systems specified in XL, a highly expressive value-passing language. Metalocking [Agesen et al. 1999] is a highly optimized technique for ensuring mutually exclusive access by threads to object monitor queues and, therefore, plays an essential role in allowing Java to offer concurrent access to objects. Metalocking can be viewed as a two-tiered scheme. At the upper level, the metalock level, a thread waits until it can enqueue itself on an object's monitor queue in a mutually exclusive manner. At the lower level, the monitor-lock level, enqueued threads race to obtain exclusive access to the object. Our abstract XL specification of the metalocking algorithm is fully parameterized, both on the number of threads M and the number of objects N. It also captures a sophisticated optimization of the basic metalocking algorithm known as extra-fast locking and unlocking of uncontended objects. Using XMC, we show that for a variety of values of M and N, the algorithm indeed provides mutual exclusion and freedom from deadlock and lockout at the metalock level. We also show that, while the monitor-lock level of the protocol preserves mutual exclusion and deadlock-freedom, it is not lockout-free because the protocol's designers chose to give equal preference to awaiting threads and newly arrived threads.

Reference Type: Journal Article
Record Number: 130
Author: D. Batory, C. Johnson, B. MacDonald and D. v. Heeder
Year: 2002
Title: Achieving extensibility through product-lines and domain-specific languages: a case study
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 11
Issue: 2
Pages: 191-214
ISSN: 1049-331X
DOI: 10.1145/505145.505147
Abstract: This is a case study in the use of product-line architectures (PLAs) and domain-specific languages (DSLs) to design an extensible command-and-control simulator for Army fire support. The reusable components of our PLA are layers or "aspects" whose addition or removal simultaneously impacts the source code of multiple objects in multiple, distributed programs. The complexity of our component specifications is substantially reduced by using a DSL for defining and refining state machines, abstractions that are fundamental to simulators. We present preliminary results that show how our PLA and DSL synergistically produce a more flexible way of implementing state-machine-based simulators than is possible with a pure Java implementation.
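A state-machine DSL of the kind record 130 describes can be as small as "machines declared as data, interpreted generically." This Python sketch uses an invented syntax purely for illustration, not the paper's DSL:

    # Machines are plain data: state -> {event -> next state}.
    FIRE_SUPPORT = {
        "idle":   {"request": "aiming"},
        "aiming": {"lock": "firing", "abort": "idle"},
        "firing": {"done": "idle"},
    }

    def run(machine, start, events):
        state = start
        for ev in events:
            state = machine[state].get(ev, state)   # ignore undefined events
        return state

    print(run(FIRE_SUPPORT, "idle", ["request", "lock", "done"]))  # -> idle

Because the machine is data, a product-line layer can refine it (add or override transitions) without touching the interpreter, which is roughly the flexibility the abstract contrasts with a hand-coded Java implementation.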

Reference Type: Journal Article
Record Number: 169
Author: A. Bauer, M. Leucker and C. Schallhart
Year: 2011
Title: Runtime Verification for LTL and TLTL
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 21
Issue: 4
ISSN: 1049-331X
DOI: 10.1145/2000799.2000800
Keywords: Assertion checkers; monitors; runtime verification
Abstract: This paper studies runtime verification of properties expressed either in linear-time temporal logic (LTL) or timed linear-time temporal logic (TLTL). It classifies runtime verification by identifying its distinguishing features relative to model checking and testing, respectively. It introduces a three-valued semantics (with truth values true, false, inconclusive) as an adequate interpretation as to whether a partial observation of a running system meets an LTL or TLTL property. For LTL, a conceptually simple monitor generation procedure is given, which is optimal in two respects: first, the size of the generated deterministic monitor is minimal, and, second, the monitor identifies a continuously monitored trace as either satisfying or falsifying a property as early as possible. The feasibility of the developed methodology is demonstrated using a collection of real-world temporal logic specifications. Moreover, the presented approach is related to the properties monitorable in general and is compared to existing concepts in the literature. It is shown that the set of monitorable properties does not only encompass the safety and co-safety properties but is strictly larger. For TLTL, the same road map is followed by first defining a three-valued semantics. The corresponding construction of a timed monitor is more involved, yet, as shown, possible.
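The three-valued semantics of record 169 is easy to see on two fixed properties. A real monitor generator compiles arbitrary LTL to automata; this Python sketch hard-codes "globally p" and "finally p" to show why finite prefixes yield true, false, or inconclusive:

    TRUE, FALSE, INCONCLUSIVE = "true", "false", "?"

    def monitor_globally(p, trace):
        for event in trace:
            if not p(event):
                return FALSE          # a violation can never be undone
        return INCONCLUSIVE           # no finite prefix proves G p

    def monitor_finally(p, trace):
        for event in trace:
            if p(event):
                return TRUE           # satisfaction is irrevocable
        return INCONCLUSIVE           # p may still occur later

    trace = [1, 2, 3]
    print(monitor_globally(lambda e: e < 3, trace))   # false
    print(monitor_finally(lambda e: e == 3, trace))   # true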

Reference Type: Journal Article
Record Number: 25
Author: L. Bauer, J. Ligatti and D. Walker
Year: 2009
Title: Composing expressive runtime security policies
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 18
Issue: 3
Pages: 1-43
ISSN: 1049-331X
DOI: 10.1145/1525880.1525882
Abstract: Program monitors enforce security policies by interposing themselves into the control flow of untrusted software whenever that software attempts to execute security-relevant actions. At the point of interposition, a monitor has authority to permit or deny (perhaps conditionally) the untrusted software's attempted action. Program monitors are common security enforcement mechanisms and integral parts of operating systems, virtual machines, firewalls, network auditors, and antivirus and antispyware tools. Unfortunately, the runtime policies we require program monitors to enforce grow more complex, both as the monitored software is given new capabilities and as policies are refined in response to attacks and user feedback. We propose dealing with policy complexity by organizing policies in such a way as to make them composable, so that complex policies can be specified more simply as compositions of smaller subpolicy modules. We present a fully implemented language and system called Polymer that allows security engineers to specify and enforce composable policies on Java applications. We formalize the central workings of Polymer by defining an unambiguous semantics for our language. Using this formalization, we state and prove an uncircumventability theorem which guarantees that monitors will intercept all security-relevant actions of untrusted software.
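Policy composition in the spirit of record 25 can be sketched with small policy modules and a combinator. The API below is invented for illustration (Polymer itself is a Java language extension with a richer suggestion-based semantics):

    # Each subpolicy judges a security-relevant action; conjunction
    # builds a stricter composite policy.
    def no_network(action):
        return not action.startswith("net.")

    def read_only(action):
        return not action.startswith("file.write")

    def conjunction(*policies):
        return lambda action: all(p(action) for p in policies)

    sandbox = conjunction(no_network, read_only)
    for a in ["file.read", "file.write", "net.connect"]:
        print(a, "allowed" if sandbox(a) else "denied")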

Reference Type: Journal Article
Record Number: 137
Author: J.-R. Beauvais, E. Rutten, T. Gautier, R. Houdebine, P. L. Guernic and Y.-M. Tang
Year: 2001
Title: Modeling statecharts and activitycharts as signal equations
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 10
Issue: 4
Pages: 397-451
ISSN: 1049-331X
DOI: 10.1145/384189.384191
Abstract: The languages for modeling reactive systems are of different styles, like the imperative, state-based ones and the declarative, data-flow ones. They are adapted to different application domains. This paper, through the example of the languages Statecharts and Signal, shows a way to give a model of an imperative specification (Statecharts) in a declarative, equational one (Signal). This model constitutes a formal model of the Statemate semantics of Statecharts, upon which formal analysis techniques can be applied. Being a transformation from an imperative to a declarative structure, it involves the definition of generic models for the explicit management of state (in the case of control as well as of data). In order to obtain a structural construction of the model, a hierarchical and modular organization is proposed, including proper management and propagation of control along the hierarchy. The results presented here cover the essential features of Statecharts as well as of another language of Statemate: Activitycharts. As a translation, it makes multiformalism specification possible and provides support for the integrated operation of the languages. The motivation lies also in the perspective of gaining access to the various formal analysis and implementation tools of the synchronous technology, using the DC+ exchange format, as in the Sacres programming environment.

Reference Type: Journal Article
Record Number: 123
Author: M. Bernardo, P. Ciancarini and L. Donatiello
Year: 2002
Title: Architecting families of software systems with process algebras
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 11
Issue: 4
Pages: 386-426
ISSN: 1049-331X
DOI: 10.1145/606612.606614
Abstract: Software components can give rise to several kinds of architectural mismatches when assembled together in order to form a software system. A formal description of the architecture of the resulting component-based software system may help to detect such architectural mismatches and to single out the components that cause them. In this article, we concentrate on deadlock-related architectural mismatches arising from three different causes that we identify: incompatibility between two components due to a single interaction, incompatibility between two components due to the combination of several interactions, and lack of interoperability among a set of components forming a cyclic topology. We develop a process algebra-based architectural description language called PADL, which deals with all three causes through an architectural compatibility check and an architectural interoperability check relying on standard observational equivalences. The adequacy of the architectural compatibility check is assessed on a compressing proxy system, while the adequacy of the architectural interoperability check is assessed on a cruise control system. We then address the issue of scaling the architectural compatibility and interoperability checks to architectural styles through an extension of PADL. The formalization of an architectural style is complicated by the presence of two degrees of freedom within the set of instances of the style: variability of the internal behavior of the components and variability of the topology formed by the components. As a first step towards the solution of the problem, we propose an intermediate abstraction called architectural type, whose instances differ only in the internal behavior of their components. We define an efficient architectural conformity check based on a standard observational equivalence to verify whether an architecture is an instance of an architectural type. We show that all the architectures conforming to the same architectural type possess the same compatibility and interoperability properties.

Reference Type: Journal Article
Record Number: 91
Author: J. Berstel, S. C. Reghizzi, G. Roussel and P. S. Pietro
Year: 2005
Title: A scalable formal method for design and automatic checking of user interfaces
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 14
Issue: 2
Pages: 124-167
ISSN: 1049-331X
DOI: 10.1145/1061254.1061256
Abstract: The article addresses the formal specification, design and implementation of the behavioral component of graphical user interfaces. The complex sequences of visual events and actions that constitute dialogs are specified by means of modular, communicating grammars called VEG (Visual Event Grammars), which extend traditional BNF grammars to make them more convenient for modeling dialogs. A VEG specification is independent of the actual layout of the GUI, but it can easily be integrated with various layout design toolkits. Moreover, a VEG specification may be verified with the model checker SPIN, in order to test consistency and correctness, to detect deadlocks and unreachable states, and also to generate test cases for validation purposes. Efficient code is automatically generated by the VEG toolkit, based on compiler technology. Realistic applications have been specified, verified and implemented, like a Notepad-style editor, a graph construction library and a large real application for medical software. It is also argued that VEG can be used to specify and test voice interfaces and multimodal dialogs. The major contribution of our work is blending together a set of features coming from GUI design, compilers, software engineering and formal verification. Even though we do not claim novelty in each of the techniques adopted for VEG, they have been united into a toolkit supporting all GUI design phases, that is, specification, design, verification and validation, linking to applications and coding.

Reference Type: Journal Article
Record Number: 142
Author: J. Bible, G. Rothermel and D. S. Rosenblum
Year: 2001
Title: A comparative study of coarse- and fine-grained safe regression test-selection techniques
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 10
Issue: 2
Pages: 149-183
ISSN: 1049-331X
DOI: 10.1145/367008.367015
Abstract: Regression test-selection techniques reduce the cost of regression testing by selecting a subset of an existing test suite to use in retesting a modified program. Over the past two decades, numerous regression test-selection techniques have been described in the literature. Initial empirical studies of some of these techniques have suggested that they can indeed benefit testers, but so far, few studies have empirically compared different techniques. In this paper, we present the results of a comparative empirical study of two safe regression test-selection techniques. The techniques we studied have been implemented as the tools DejaVu and TestTube; we compared these tools in terms of a cost model incorporating precision (ability to eliminate unnecessary test cases), analysis cost, and test execution cost. Our results indicate that, in many instances, despite its relative lack of precision, TestTube can reduce the time required for regression testing as much as the more precise DejaVu. In other instances, particularly where the time required to execute test cases is long, DejaVu's superior precision gives it a clear advantage over TestTube. Such variations in relative performance can complicate a tester's choice of which tool to use. Our experimental results suggest that a hybrid regression test-selection tool that combines features of TestTube and DejaVu may be an answer to these complications; we present an initial case study that demonstrates the potential benefit of such a tool.
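The common core of the two tools compared in record 142 is coverage-based selection: rerun every test whose coverage touches a changed entity. DejaVu and TestTube differ in the granularity of "entity" (statements vs. functions/modules). A minimal sketch with invented data:

    # test -> entities it executes (entity granularity is the tools' key difference)
    coverage = {
        "t1": {"f", "g"},
        "t2": {"g"},
        "t3": {"h"},
    }
    changed = {"f"}          # entities modified in the new version

    selected = [t for t, ents in coverage.items() if ents & changed]
    print(selected)          # ['t1']: t2 and t3 can be skipped safely

Coarser entities make the analysis cheaper but select more tests, which is precisely the precision/analysis-cost trade-off the study measures.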

Reference Type: Journal Article
Record Number: 64
Author: D. Binkley, N. Gold and M. Harman
Year: 2007
Title: An empirical study of static program slice size
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 16
Issue: 2
Pages: 8
ISSN: 1049-331X
DOI: 10.1145/1217295.1217297
Abstract: This article presents results from a study of all slices from 43 programs, ranging up to 136,000 lines of code in size. The study investigates the effect of five aspects that affect slice size. Three slicing algorithms are used to study two algorithmic aspects: calling-context treatment and slice granularity. The remaining three aspects affect the upstream dependencies considered by the slicer. These include collapsing structure fields, removal of dead code, and the influence of points-to analysis. The results show that for the most precise slicer, the average slice contains just under one-third of the program. Furthermore, ignoring calling context causes a 50% increase in slice size, and while (coarse-grained) function-level slices are 33% larger than corresponding statement-level slices, they may be useful predictors of the (finer-grained) statement-level slice size. Finally, upstream analyses have an order of magnitude less influence on slice size.

Reference Type: Journal Article
Record Number: 72
Author: M. Brambilla, S. Ceri, P. Fraternali and I. Manolescu
Year: 2006
Title: Process modeling in Web applications
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 15
Issue: 4
Pages: 360-409
ISSN: 1049-331X
DOI: 10.1145/1178625.1178627
Abstract: While Web applications evolve towards ubiquitous, enterprise-wide or multi-enterprise information systems, they face new requirements, such as the capability of managing complex processes spanning multiple users and organizations, by interconnecting software provided by different organizations. Significant efforts are currently being invested in application integration, to support the composition of business processes of different companies, so as to create complex, multiparty business scenarios. In this setting, Web applications, which were originally conceived to allow the user-to-system dialogue, are extended with Web services, which enable system-to-system interaction, and with process control primitives, which permit the implementation of the required business constraints. This article presents new Web engineering methods for the high-level specification of applications featuring business processes and remote services invocation. Process- and service-enabled Web applications benefit from the high-level modeling and automatic code generation techniques that have been fruitfully applied to conventional Web applications, broadening the class of Web applications that take advantage of these powerful software engineering techniques. All the concepts presented in this article are fully implemented within a CASE tool.

Reference Type: Journal Article
Record Number: 114
Author: M. G. J. v. d. Brand, P. Klint and J. J. Vinju
Year: 2003
Title: Term rewriting with traversal functions
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 12
Issue: 2
Pages: 152-190
ISSN: 1049-331X
DOI: 10.1145/941566.941568
Abstract: Term rewriting is an appealing technique for performing program analysis and program transformation. Tree (term) traversal is frequently used but is not supported by standard term rewriting. We extend many-sorted, first-order term rewriting with traversal functions that automate tree traversal in a simple and type-safe way. Traversal functions can be bottom-up or top-down traversals and can either traverse all nodes in a tree or stop the traversal at a certain depth as soon as a matching node is found. They can either define sort-preserving transformations or mappings to a fixed sort. We give small and somewhat larger examples of traversal functions and describe their operational semantics and implementation. An assessment of various applications and a discussion conclude the article.
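A bottom-up traversal function of the kind record 114 adds to term rewriting can be imitated in a few lines. The paper extends many-sorted rewriting (in the ASF+SDF setting); this Python analogue is only illustrative, with terms encoded as nested tuples:

    # A term is ("op", child, ...) or a leaf value.
    def bottom_up(rule, term):
        """Apply `rule` to every node, children first."""
        if isinstance(term, tuple):
            op, *kids = term
            term = (op, *[bottom_up(rule, k) for k in kids])
        return rule(term)

    def simplify(term):          # sort-preserving rewrite: x + 0 -> x
        if isinstance(term, tuple) and term[0] == "+" and term[2] == 0:
            return term[1]
        return term

    print(bottom_up(simplify, ("+", ("+", "x", 0), 0)))   # -> 'x'

The point of the extension is that `bottom_up` is written once, generically, instead of one boilerplate congruence rule per term constructor.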

Reference Type: Journal Article
Record Number: 30
Author: T. D. Breaux, A. I. Antón and J. Doyle
Year: 2008
Title: Semantic parameterization: A process for modeling domain descriptions
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 18
Issue: 2
Pages: 1-27
ISSN: 1049-331X
DOI: 10.1145/1416563.1416565
Abstract: Software engineers must systematically account for the broad scope of environmental behavior, including nonfunctional requirements, intended to coordinate the actions of stakeholders and software systems. The Inquiry Cycle Model (ICM) provides engineers with a strategy to acquire and refine these requirements by having domain experts answer six questions: who, what, where, when, how, and why. Goal-based requirements engineering has led to the formalization of requirements to answer the ICM questions about when, how, and why goals are achieved, maintained, or avoided. In this article, we present a systematic process called Semantic Parameterization for expressing natural language domain descriptions of goals as specifications in description logic. The formalization of goals in description logic allows engineers to automate inquiries using who, what, and where questions, completing the formalization of the ICM questions. The contributions of this approach include new theory to conceptually compare and disambiguate goal specifications that enables querying goals and organizing goals into specialization hierarchies. The artifacts in the process include a dictionary that aligns the domain lexicon with unique concepts, distinguishing between synonyms and polysemes, and several natural language patterns that aid engineers in mapping common domain descriptions to formal specifications. Semantic Parameterization has been empirically validated in three case studies on policy and regulatory descriptions that govern information systems in the finance and health-care domains.

Reference Type: Journal Article
Record Number: 14
Author: A. Brogi, R. Popescu and M. Tanca
Year: 2010
Title: Design and implementation of Sator: A web service aggregator
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 19
Issue: 3
Pages: 1-21
ISSN: 1049-331X
DOI: 10.1145/1656250.1656254
Abstract: Our long-term objective is to develop a general methodology for deploying (Web) service aggregation and adaptation middleware, capable of suitably overcoming syntactic and behavioral mismatches in view of application integration within and across organizational boundaries. This article focuses on describing the core aggregation process, which generates the workflow of a composite service from a set of service workflows to be aggregated and a data-flow mapping linking service parameters.
Notes: Software Construction Tools > Program Editors

Reference Type: Journal Article
Record Number: 70
Author: M. Broy, I. H. Krüger and M. Meisinger
Year: 2007
Title: A formal model of services
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 16
Issue: 1
Pages: 5
ISSN: 1049-331X
DOI: 10.1145/1189748.1189753
Abstract: Service-oriented software systems rapidly gain importance across application domains: they emphasize functionality (services), rather than structural entities (components), as the basic building block for system composition. More specifically, services coordinate the interplay of components to accomplish specific tasks. In this article, we establish a foundation of service orientation: based on the Focus theory of distributed systems (see Broy and Stølen [2001]), we introduce a theory and formal model of services. In Focus, systems are composed of interacting components. A component is a total behavior. We introduce a formal model of services where, in contrast, a service is a partial behavior. For services and components, we work out foundational specification techniques and outline methodological development steps. We show how services can be structured and how software architectures can be composed of services and components. Although our emphasis is on a theoretical foundation of the notion of services, we demonstrate the utility of the concepts we introduce by means of a running example from the automotive domain.

Reference Type: Journal Article
Record Number: 181
Author: Changhai Nie and H. Leung
Year: 2011
Title: The Minimal Failure-Causing Schema of Combinatorial Testing
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 20
Issue: 4
ISSN: 1049-331X
DOI: 10.1145/2000799.2000801
Keywords: Combinatorial testing; failure diagnosis
Abstract: Combinatorial Testing (CT) involves the design of a small test suite to cover the parameter value combinations so as to detect failures triggered by the interactions among these parameters. To make full use of CT and to extend its advantages, this article first gives a model of CT and then presents a theory of the Minimal Failure-causing Schema (MFS), including the concept of the MFS, proof of its existence, some of its properties, and a method of finding the MFS. Then we propose a methodology for CT based on this MFS theory and the existing research. Our MFS-based methodology emphasizes that CT should work on accurate testing requirements, and has the following advantages: (1) it detects failures to the greatest degree with the least cost; (2) effectiveness is improved by emphasizing the mining of information in the software and making full use of the information gained from test design and execution; (3) it determines the root causes of failures and reveals related faults near the exposed ones; (4) it provides a foundation and model for regression testing and software quality evaluation of CT. A case study is presented to illustrate the MFS-based CT methodology, and an empirical study on real software developed by us shows that the MFS really exists and that the methodology based on the MFS can considerably improve CT.
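An MFS, as defined in record 181, is the smallest set of parameter-value pairs whose presence makes a test fail. A naive search (the failing function below is invented; the paper develops the theory and efficient methods properly) just tries sub-schemas of a failing test, smallest first:

    from itertools import combinations

    def fails(test):             # hypothetical SUT: fails when a=1 and c=0
        return test.get("a") == 1 and test.get("c") == 0

    failing_test = {"a": 1, "b": 2, "c": 0}

    def minimal_failure_schema(test):
        items = sorted(test.items())
        for size in range(1, len(items) + 1):      # smallest schema first
            for combo in combinations(items, size):
                if fails(dict(combo)):
                    return dict(combo)

    print(minimal_failure_schema(failing_test))    # {'a': 1, 'c': 0}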

Reference Type: Journal Article
Record Number: 108
Author: M. Chechik, B. Devereux, S. Easterbrook and A. Gurfinkel
Year: 2003
Title: Multi-valued symbolic model-checking
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 12
Issue: 4
Pages: 371-408
ISSN: 1049-331X
DOI: 10.1145/990010.990011
Abstract: This article introduces the concept of multi-valued model-checking and describes a multi-valued symbolic model-checker, χChek. Multi-valued model-checking is a generalization of classical model-checking, useful for analyzing models that contain uncertainty (lack of essential information) or inconsistency (contradictory information, often occurring when information is gathered from multiple sources). Multi-valued logics support the explicit modeling of uncertainty and disagreement by providing additional truth values in the logic. This article provides a theoretical basis for multi-valued model-checking and discusses some of its applications. A companion article [Chechik et al. 2002b] describes implementation issues in detail. The model-checker works for any member of a large class of multi-valued logics. Our modeling language is based on a generalization of Kripke structures, where both atomic propositions and transitions between states may take any of the truth values of a given multi-valued logic. Properties are expressed in χCTL, our multi-valued extension of the temporal logic CTL. We define the class of logics, present the theory of multi-valued sets and multi-valued relations used in our model-checking algorithm, and define the multi-valued extensions of CTL and Kripke structures. We explore the relationship between χCTL and CTL, and provide a symbolic model-checking algorithm for χCTL. We also address the use of fairness in multi-valued model-checking. Finally, we discuss some applications of the multi-valued model-checking approach.
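The smallest instance of the logics record 108 is parameterized by is a three-valued lattice. This sketch (an illustration of the general idea, not the paper's algebra) orders the truth values false < maybe < true and takes conjunction and disjunction as lattice meet and join:

    FALSE, MAYBE, TRUE = 0, 1, 2
    NAMES = {0: "false", 1: "maybe", 2: "true"}

    def AND(a, b): return min(a, b)      # meet
    def OR(a, b):  return max(a, b)      # join
    def NOT(a):    return 2 - a          # order-reversing negation

    print(NAMES[AND(TRUE, MAYBE)])       # maybe
    print(NAMES[OR(FALSE, MAYBE)])       # maybe
    print(NAMES[NOT(MAYBE)])             # maybe: uncertainty is preserved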


Reference Type: Journal Article
Record Number: 44
Author: T. Y. Chen and R. Merkel
Year: 2008
Title: An upper bound on software testing effectiveness
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 17
Issue: 3
Pages: 1-27
Short Title: An upper bound on software testing effectiveness
ISSN: 1049-331X
DOI: 10.1145/1363102.1363107
Legal Note: 1363107
Abstract: Failure patterns describe typical ways in which inputs revealing program failure are distributed across the input domain: in many cases, clustered together in contiguous regions. Based on these observations, several debug testing methods have been developed. We examine the upper bound of debug testing effectiveness improvements possible through making assumptions about the shape, size, and orientation of failure patterns. We consider the bounds for testing strategies with respect to minimizing the F-measure, maximizing the P-measure, and maximizing the E-measure. Surprisingly, we find that the empirically measured effectiveness of some existing methods that are not based on these assumptions is close to the theoretical upper bound of these strategies. The assumptions made to obtain the upper bound, and its further implications, are also examined.
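
For readers unfamiliar with the three measures: the F-measure is the expected number of test executions needed to reveal the first failure, the P-measure is the probability that a test set reveals at least one failure, and the E-measure is the expected number of failures a test set reveals. A minimal sketch of the F-measure under random testing, using the standard definitions rather than anything specific to this article:

    # With failure rate theta and random testing with replacement, the
    # number of tests to the first failure is geometric, so F = 1/theta.
    import random

    def f_measure_random(theta, trials=20_000, seed=0):
        rng = random.Random(seed)
        total = 0
        for _ in range(trials):
            n = 1
            while rng.random() >= theta:  # keep testing until a failure
                n += 1
            total += n
        return total / trials

    theta = 0.02
    print(1 / theta)                # analytical F-measure: 50.0
    print(f_measure_random(theta))  # simulated estimate, close to 50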

Reference Type: Journal Article
Record Number: 319
Author: Christian Kästner, Sven Apel, Thomas Thüm and G. Saake
Year: 2012
Title: Type checking annotation-based product lines
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 21
Issue: 3
Pages: 1-39
Short Title: Type checking annotation-based product lines
ISSN: 1049-331X
DOI: 10.1145/2211616.2211617
Keywords: Conditional compilation
Design
Featherweight Java
Languages
Preprocessors
Program editors
Software product lines
Abstract: Software product line engineering is an efficient means of generating a family of program variants for a domain from a single code base. However, because of the potentially high number of possible program variants, it is difficult to test them all and ensure properties like type safety for the entire product line. We present a product-line-aware type system that can type check an entire software product line without generating each variant in isolation. Specifically, we extend the Featherweight Java calculus with feature annotations for product-line development and prove formally that all program variants generated from a well-typed product line are well typed. Furthermore, we present a solution to the problem of typing mutually exclusive features. We discuss how results from our formalization helped implement our own product-line tool CIDE for full Java and report on our experience with detecting type errors in four existing software product line implementations.
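
A drastically simplified sketch of the core check behind product-line-aware typing may help. In annotation-based product lines, each program element carries a presence condition; a reference is well typed in every variant exactly when the target is present whenever the source is. With presence conditions restricted to conjunctions of features, that implication reduces to a subset test. All names below are hypothetical and this is not the paper's type system.

    # element -> set of features that must all be selected for it to exist
    elements = {
        "field Order.tax": {"TAX"},
        "method Order.total": set(),        # present in every variant
        "method Order.taxedTotal": {"TAX"},
    }
    references = [  # (source element, target element)
        ("method Order.taxedTotal", "field Order.tax"),  # safe
        ("method Order.total", "field Order.tax"),       # breaks when TAX is off
    ]

    for src, dst in references:
        # source implies target  <=>  features(dst) is a subset of features(src)
        ok = elements[dst] <= elements[src]
        print(f"{src} -> {dst}: {'well typed' if ok else 'type error in some variant'}")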

Reference Type: Journal Article
Record Number: 47
Author: J. M. Cobleigh, G. S. Avrunin and L. A. Clarke
Year: 2008
Title: Breaking up is hard to do: An evaluation of automated assume-guarantee reasoning
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 17
Issue: 2
Pages: 1-52
Short Title: Breaking up is hard to do: An evaluation of automated assume-guarantee reasoning
ISSN: 1049-331X
DOI: 10.1145/1348250.1348253
Legal Note: 1348253
Abstract: Finite-state verification techniques are often hampered by the state-explosion problem. One proposed approach for addressing this problem is assume-guarantee reasoning, where a system under analysis is partitioned into subsystems and these subsystems are analyzed individually. By composing the results of these analyses, it can be determined whether or not the system satisfies a property. Because each subsystem is smaller than the whole system, analyzing each subsystem individually may reduce the overall cost of verification. Often the behavior of a subsystem is dependent on the subsystems with which it interacts, and thus it is usually necessary to provide assumptions about the environment in which a subsystem executes. Because developing assumptions has been a difficult manual task, the evaluation of assume-guarantee reasoning has been limited. Using recent advances for automatically generating assumptions, we undertook a study to determine if assume-guarantee reasoning provides an advantage over monolithic verification. In this study, we considered all two-way decompositions for a set of systems and properties, using two different verifiers, FLAVERS and LTSA. By increasing the number of repeated tasks in these systems, we evaluated the decompositions as they were scaled. We found that in only a few cases can assume-guarantee reasoning verify properties on larger systems than monolithic verification can, and in these cases the systems that can be analyzed are only a few sizes larger. Although these results are discouraging, they provide insight about research directions that should be pursued and highlight the importance of experimental evaluation in this area.
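
The proof rule evaluated in this line of work is the standard non-circular assume-guarantee rule: to verify a property P of a system decomposed into subsystems M1 and M2, one finds an assumption A such that

    <A> M1 <P>            (M1 guarantees P in any environment satisfying A)
    <true> M2 <A>         (M2 satisfies A unconditionally)
    ---------------------------------------------------------------
    <true> M1 || M2 <P>   (therefore the composed system satisfies P)

Automated assumption generation supplies A; the study above asks whether the two smaller checks are actually cheaper in practice than checking M1 || M2 monolithically.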

Reference Type: Journal Article
Record Number: 113
Author: A. Coen-Porisini, M. Pradella, M. Rossi and D. Mandrioli
Year: 2003
Title: A formal approach for designing CORBA-based applications
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 12
Issue: 2
Pages: 107-151
Short Title: A formal approach for designing CORBA-based applications
ISSN: 1049-331X
DOI: 10.1145/941566.941567
Legal Note: 941567
Abstract: The design of distributed applications in a CORBA-based environment can be carried out by means of an incremental approach, which starts from the specification and leads to the high-level architectural design. This article discusses a methodology to transform a formal specification written in TRIO into a high-level design document written in an extension of TRIO, named TRIO/CORBA (TC). The TC language is suited to formally describe the high-level architecture of a CORBA-based application. As a result, designers are offered high-level concepts that precisely define the architectural elements of an application. Furthermore, TC offers mechanisms to extend its base semantics, and can be adapted to future developments and enhancements in the CORBA standard. The methodology and the associated language are presented through a case study derived from a real Supervision and Control System.

Reference Type: Journal Article
Record Number: 111
Author: Y. Cohen and Y. A. Feldman
Year: 2003
Title: Automatic high-quality reengineering of database programs by abstraction, transformation and reimplementation
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 12
Issue: 3
Pages: 285-316
Short Title: Automatic high-quality reengineering of database programs by abstraction, transformation and reimplementation
ISSN: 1049-331X
DOI: 10.1145/958961.958962
Legal Note: 958962
Abstract: Old-generation database models, such as the indexed-sequential, hierarchical, or network models, provide record-level access to their data, with all application logic residing in the hosting program. In contrast, relational databases can perform complex operations, such as filter, aggregation, and join, on multiple records without an external specification of the record-access logic. Programs written for relational databases attempt to move as much of the application logic as possible into the database, in order to make the most of the optimizations performed internally by the database. This conceptual gap between the programming styles makes automatic high-quality translation of programs written for the older database models to the relational model difficult. It is not enough to convert just the database-access operations, since this would result in unacceptably inefficient programs. It is necessary to convert parts of the application logic from the procedural style of the hosting program (which is almost always Cobol) to the declarative style of SQL. This article describes an automatic system, called MIDAS, that performs high-quality reengineering of legacy database programs in this way. MIDAS is based on the paradigm of translation by abstraction, transformation, and reimplementation. The abstract representation is based on the Plan Calculus, with the addition of Query Graphs, introduced in this article in order to abstract the temporal behavior of database access patterns. The results of MIDAS's translation were found to be superior to those of the naive translation that only converts database-access operations in terms of readability, size of code, speed, and network data traffic. Initial industrial experience with MIDAS also demonstrates the high quality of its translations on large-scale programs.
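
The conceptual gap the abstract describes can be shown in a few lines. The example below is a hand-made illustration, not MIDAS output; Python's sqlite3 module stands in for the relational target, and all table and column names are invented.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [("acme", 10.0), ("acme", 5.0), ("globex", 7.5)])

    # Record-at-a-time style: fetch every record and keep the filtering
    # and aggregation logic in the host program.
    total = 0.0
    for customer, amount in conn.execute("SELECT customer, amount FROM orders"):
        if customer == "acme":   # filtering in application logic
            total += amount      # aggregation in application logic
    print(total)  # 15.0

    # Relational style: one declarative query; the database filters,
    # aggregates, and chooses the access path itself.
    print(conn.execute(
        "SELECT SUM(amount) FROM orders WHERE customer = 'acme'").fetchone()[0])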

Reference Type: Journal Article
Record Number: 6
Author: K. Conboy and B. Fitzgerald
Year: 2010
Title: Method and developer characteristics for effective agile method tailoring: A study of XP expert opinion
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 20
Issue: 1
Pages: 1-30
Short Title: Method and developer characteristics for effective agile method tailoring: A study of XP expert opinion
ISSN: 1049-331X
DOI: 10.1145/1767751.1767753
Legal Note: 1767753
Abstract: It has long been acknowledged that software methods should be tailored if they are to achieve optimum effect. However, comparatively little research has been carried out to date on this topic in general and, more notably, on agile methods in particular. This dearth of evidence in the case of agile methods is especially significant in that it is reasonable to expect that such methods would particularly lend themselves to tailoring. In this research, we present a framework based on interviews with 20 senior software development researchers and a review of the extant literature. The framework comprises two sets of factors (characteristics of the method, and developer practices) that can improve method tailoring effectiveness. Drawing on the framework, we then interviewed 16 expert XP practitioners to examine the current state and effectiveness of XP tailoring efforts, and to shed light on issues the framework identified as being important. The article concludes with a set of recommendations for research and practice that would advance our understanding of the method tailoring area.

Reference Type: Journal Article
Record Number: 99
Author: G. Costagliola, V. Deufemia and G. Polese
Year: 2004
Title: A framework for modeling and implementing visual notations with applications to software engineering
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 13
Issue: 4
Pages: 431-487
Short Title: A framework for modeling and implementing visual notations with applications to software engineering
ISSN: 1049-331X
DOI: 10.1145/1040291.1040293
Legal Note: 1040293
Abstract: We present a framework for modeling visual notations and for generating the corresponding visual programming environments. The framework can be used for modeling the diagrammatic notations of software development methodologies, and to generate visual programming environments with CASE tool functionalities. This is accomplished through an underlying modeling process based on the visual notation syntactic model of eXtended Positional Grammars (XPG, for short) and the associated parsing methodology, XpLR. In particular, the process requires the modeling of the basic elements (visual symbols) of a visual notation, their syntactic properties, the relations between them, the syntactic rules to formally define the set of feasible visual sentences, and a set of semantic routines performing additional checks and translation tasks. Such a process is completely supported by the VLDesk system, which enables the automatic generation of an editor for drawing visual sentences, as well as a processor for their recognition, parsing, and translation into other notations. The proposed framework also provides the basis for the definition of a meta-CASE technology. In fact, we can customize the generated visual programming environment in terms of the supported visual notation, its syntactic properties, and the translation rules. We have used this framework to model several diagrammatic notations used in software development methodologies, including those of the Unified Modeling Language.

Reference Type: Journal Article
Record Number: 77
Author: S. Counsell, S. Swift and J. Crampton
Year: 2006
Title: The interpretation and utility of three cohesion metrics for object-oriented design
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 15
Issue: 2
Pages: 123-149
Short Title: The interpretation and utility of three cohesion metrics for object-oriented design
ISSN: 1049-331X
DOI: 10.1145/1131421.1131422
Legal Note: 1131422
Abstract: The concept of cohesion in a class has been the subject of various recent empirical studies and has been measured using many different metrics. In the structured programming paradigm, the software engineering community has adopted an informal yet meaningful and understandable definition of cohesion based on the work of Yourdon and Constantine. The object-oriented (OO) paradigm has formalised various cohesion measures, but which of those metrics is the most meaningful continues to be debated. Yet achieving highly cohesive software is fundamental to its comprehension and thus its maintainability. In this article we subject two object-oriented cohesion metrics, CAMC and NHD, to a rigorous mathematical analysis in order to better understand and interpret them. This analysis enables us to offer substantial arguments for preferring the NHD metric to CAMC as a measure of cohesion. Furthermore, we provide a complete understanding of the behaviour of these metrics, enabling us to attach a meaning to the values calculated by the CAMC and NHD metrics. In addition, we introduce a variant of the NHD metric and demonstrate that it has several advantages over CAMC and NHD. While it may be true that a generally accepted formal and informal definition of cohesion continues to elude the OO software engineering community, there seems considerable value in being able to compare, contrast, and interpret metrics which attempt to measure the same features of software.
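
Both metrics are computed from a binary matrix whose rows are a class's methods and whose columns are the parameter types used in the class. The sketch below uses the definitions as commonly stated in this literature (CAMC as the fraction of 1-entries; NHD as one minus the normalized per-column disagreement between method pairs); the example matrix is made up, and this is not code from the article.

    def camc(m):
        # fraction of (method, parameter-type) cells that are used
        k, l = len(m), len(m[0])
        return sum(sum(row) for row in m) / (k * l)

    def nhd(m):
        # agreement between method pairs, column by column
        k, l = len(m), len(m[0])
        col = [sum(row[j] for row in m) for j in range(l)]
        disagreements = sum(c * (k - c) for c in col)  # mismatched pairs per column
        return 1 - 2 * disagreements / (l * k * (k - 1))

    matrix = [            # 3 methods x 4 parameter types
        [1, 1, 0, 0],
        [1, 0, 1, 0],
        [1, 1, 1, 1],
    ]
    print(camc(matrix))   # 0.666...
    print(nhd(matrix))    # 0.5: cohesion on a 0-to-1 agreement scale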

Reference Type: Journal Article
Record Number: 48
Author: C. Csallner, Y. Smaragdakis and T. Xie
Year: 2008
Title: DSD-Crasher: A hybrid analysis tool for bug finding
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 17
Issue: 2
Pages: 1-37
Short Title: DSD-Crasher: A hybrid analysis tool for bug finding
ISSN: 1049-331X
DOI: 10.1145/1348250.1348254
Legal Note: 1348254
Abstract: DSD-Crasher is a bug finding tool that follows a three-step approach to program analysis:
D. Capture the program's intended execution behavior with dynamic invariant detection. The derived invariants exclude many unwanted values from the program's input domain.
S. Statically analyze the program within the restricted input domain to explore many paths.
D. Automatically generate test cases that focus on reproducing the predictions of the static analysis. Thereby confirmed results are feasible.
This three-step approach yields benefits compared to past two-step combinations in the literature. In our evaluation with third-party applications, we demonstrate higher precision over tools that lack a dynamic step and higher efficiency over tools that lack a static step.
Notes: Software Testing Tools > Test Generators
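
A schematic of the dynamic-static-dynamic idea, compressed to a toy scale: all names are hypothetical, and this is not DSD-Crasher itself, which infers invariants with a real detector and reasons statically rather than by enumeration.

    def target(x):                   # function under analysis
        return 10 // (x - 5)         # crashes when x == 5

    # D: dynamic step - infer an input-domain invariant from passing runs.
    observed = [0, 2, 7, 9]
    low, high = min(observed), max(observed)   # inferred: low <= x <= high

    # S: static step - within the restricted domain, predict risky inputs
    # (a real tool reasons symbolically; here we just scan the small range).
    candidates = [x for x in range(low, high + 1) if x == 5]

    # D: dynamic step - run generated tests to confirm the predictions,
    # so that every reported crash is a feasible one.
    for x in candidates:
        try:
            target(x)
        except ZeroDivisionError:
            print(f"confirmed crash for input {x}")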

Reference Type: Journal Article
Record Number: 314
Author: Dario Fischbein, Nicolas D'Ippolito, Greg Brunet, Marsha Chechik and S. Uchitel
Year: 2012
Title: Weak Alphabet Merging of Partial Behavior Models
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 21
Issue: 2
Pages: 1-47
Short Title: Weak Alphabet Merging of Partial Behavior Models
ISSN: 1049-331X
DOI: 10.1145/2089116.2089119
Keywords: Algorithms
Design
Merge
MTS
Partial behavior models
Requirements/specifications
Temporal logic
Theory
Abstract: Constructing comprehensive operational models of intended system behavior is a complex and costly task, which can be mitigated by the construction of partial behavior models, providing early feedback and subsequently elaborating them iteratively. However, how should partial behavior models with different viewpoints covering different aspects of behavior be composed? How should partial models of component instances of the same type be put together? In this article, we propose model merging of modal transition systems (MTSs) as a solution to these questions. MTS models are a natural extension of labelled transition systems that support explicit modeling of what is currently unknown about system behavior. We formally define model merging based on weak alphabet refinement, which guarantees property preservation, and show that merging consistent models is a process that should result in a minimal common weak alphabet refinement (MCR). In this article, we provide theoretical results and algorithms that support such a process. Finally, because in practice MTS merging is likely to be combined with other operations over MTSs such as parallel composition, we also study the algebraic properties of merging and apply these, together with the algorithms that support MTS merging, in a case study.

Reference Type: Journal Article
Record Number: 93
Author: E. M. Dashofy, André van der Hoek and R. N. Taylor
Year: 2005
Title: A comprehensive approach for the development of modular software architecture description languages
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 14
Issue: 2
Pages: 199-245
Short Title: A comprehensive approach for the development of modular software architecture description languages
ISSN: 1049-331X
DOI: 10.1145/1061254.1061258
Legal Note: 1061258
Abstract: Research over the past decade has revealed that modeling software architecture at the level of components and connectors is useful in a growing variety of contexts. This has led to the development of a plethora of notations for representing software architectures, each focusing on different aspects of the systems being modeled. In general, these notations have been developed without regard to reuse or extension. This makes the effort in adapting an existing notation to a new purpose commensurate with developing a new notation from scratch. To address this problem, we have developed an approach that allows for the rapid construction of new architecture description languages (ADLs). Our approach is unique because it encapsulates ADL features in modules that are composed to form ADLs. We achieve this by leveraging the extension mechanisms provided by XML and XML schemas. We have defined a set of generic, reusable ADL modules called xADL 2.0, useful as an ADL by itself, but also extensible to support new applications and domains. To support this extensibility, we have developed a set of reflective syntax-based tools that adapt to language changes automatically, as well as several semantically-aware tools that provide support for advanced features of xADL 2.0. We demonstrate the effectiveness, scalability, and flexibility of our approach through a diverse set of experiences. First, our approach has been applied in industrial contexts, modeling software architectures for aircraft software and spacecraft systems. Second, we show how xADL 2.0 can be extended to support the modeling features found in two different representations for modeling product-line architectures. Finally, we show how our infrastructure has been used to support its own development. The technical contribution of our infrastructure is augmented by several research contributions: the first decomposition of an architecture description language into modules, insights about how to develop new language modules and a process for integrating them, and insights about the roles of different kinds of tools in a modular ADL-based infrastructure.

Reference Type: Journal Article
Record Number: 194
Author: David W. Binkley, Mark Harman and K. Lakhotia
Year: 2011
Title: FlagRemover: A testability transformation for transforming loop-assigned flags
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 20
Issue: 3
Short Title: FlagRemover: A testability transformation for transforming loop-assigned flags
ISSN: 1049-331X
DOI: 10.1145/2000791.2000796
Keywords: Empirical evaluation
Evolutionary testing
Testability transformation
Testing and debugging
Abstract: Search-based testing is a widely studied technique for automatically generating test inputs, with the aim of reducing the cost of software engineering activities that rely upon testing. However, search-based approaches degenerate to random testing in the presence of flag variables, because flags create spikes and plateaux in the fitness landscape. Both of these features are known to denote hard optimization problems for all search-based optimization techniques. Several authors have studied flag removal transformations and fitness function refinements to address the issue of flags, but the problem of loop-assigned flags remains unsolved. This paper introduces a testability transformation, along with a tool that transforms programs with loop-assigned flags into flag-free equivalents, so that existing search-based test data generation approaches can successfully be applied. The paper presents the results of an empirical study that demonstrates the effectiveness and efficiency of the testability transformation on programs including those made up of open source and industrial production code, as well as test data generation problems specifically created to denote hard optimization problems.
Notes: Software Testing Tools > Test evaluation tools
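
The loop-assigned flag problem is easiest to see side by side. The example below is hand-made, not FlagRemover output: the original flag gives the search a flat fitness landscape (the branch is either taken or not, with no gradient), while a counter-based rewrite of the kind this line of work describes exposes how close an input is to taking the branch.

    def covered_original(values):
        flag = True
        for v in values:          # loop-assigned flag: one spike, no slope
            if v != 0:
                flag = False
        return "target reached" if flag else "missed"

    def covered_transformed(values):
        counter = 0
        for v in values:
            if v != 0:
                counter += 1      # counts how often the flag would be unset
        # `counter` now gives the search a gradient: a fitness function can
        # reward inputs that drive it toward zero.
        return "target reached" if counter == 0 else "missed"

    print(covered_original([0, 3, 0]), covered_transformed([0, 3, 0]))  # missed missed
    print(covered_original([0, 0, 0]), covered_transformed([0, 0, 0]))  # target reached x2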

Reference Type: Journal Article
Record Number: 324
Author: Dawei Qi, Abhik Roychoudhury, Zhenkai Liang and K. Vaswani
Year: 2012
Title: An approach to debugging evolving programs
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 21
Issue: 3
Pages: 1-29
Short Title: An approach to debugging evolving programs
ISSN: 1049-331X
DOI: 10.1145/2211616.2211622
Keywords: Debuggers
Debugging aids
Experimentation
Reliability
Software debugging
Software evolution
Symbolic execution
Abstract: Bugs in programs are often introduced when programs evolve from a stable version to a new version. In this article, we propose a new approach called DARWIN for automatically finding potential root causes of such bugs. Given two programs (a reference program and a modified program) and an input that fails on the modified program, our approach uses symbolic execution to automatically synthesize a new input that (a) is very similar to the failing input and (b) does not fail. We find the potential cause(s) of failure by comparing the control-flow behavior of the passing and failing inputs and identifying code fragments where the control flows diverge. A notable feature of our approach is that it handles hard-to-explain bugs, like code-missing errors, by pointing to code in the reference program. We have implemented this approach and conducted experiments using several real-world applications, such as the Apache web server, libPNG (a library for manipulating PNG images), and TCPflow (a program for displaying data sent through TCP connections). In each of these applications, DARWIN was able to localize bugs with high accuracy. Even though these applications contain several thousand lines of code, DARWIN could usually narrow down the potential root cause(s) to fewer than ten lines. In addition, we find that the inputs synthesized by DARWIN provide additional value by revealing other undiscovered errors.
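
The trace-comparison step of such an approach can be sketched in a few lines. This is a schematic with hypothetical names, not DARWIN itself: the real system synthesizes the passing input by symbolic execution rather than receiving it ready-made.

    def program(x, version):
        trace = []                                # record branch directions
        trace.append("A" if x > 0 else "a")
        if version == "new" and x % 2 == 0:       # change introduced in new version
            trace.append("B")                     # buggy extra path
        else:
            trace.append("b")
        return trace

    failing_input, passing_input = 4, 3           # similar inputs, one fails
    t_fail = program(failing_input, "new")
    t_pass = program(passing_input, "new")

    # Localize: the first point where the two runs' control flows diverge.
    for i, (p, f) in enumerate(zip(t_pass, t_fail)):
        if p != f:
            print(f"control flow diverges at branch {i}: {p!r} vs {f!r}")
            break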
experimentation reliability software debugging software evolution symbolic execution symbolic execution Abstract: Bugs in programs are often introduced when programs evolve from a stable version to a new version. In this article, we propose a new approach called DARWIN for automatically finding potential root causes of such bugs. Given two programsa reference program and a modified programand an input that fails on the modified program, our approach uses symbolic execution to automatically synthesize a new input that (a) is very similar to the failing input and (b) does not fail. We find the potential cause(s) of failure by comparing control-flow behavior of the passing and failing inputs and identifying code fragments where the control flows diverge. A notable feature of our approach is that it handles hard-to-explain bugs, like code missing errors, by pointing to code in the reference program. We have implemented this approach and conducted experiments using several real-world applications, such as the Apache Web server, libPNG (a library for manipulating PNG images), and TCPflow (a program for displaying data sent through TCP connections). In each of these applications, DARWIN was able to localize bugs with high accuracy. Even though these applications contain several thousands of lines of code, DARWIN could usually narrow down the potential root cause(s) to less than ten lines. In addition, we find that the inputs synthesized by DARWIN provide additional value by revealing other undiscovered errors. Reference Type: Journal Article Record Number: 17 Author: N. Desai, A. K. Chopra and M. P. Singh Year: 2009 Title: Amoeba: A methodology for modeling and evolving cross-organizational business processes Journal: ACM Trans. Softw. Eng. Methodol. Volume: 19 Issue: 2 Pages: 1-45 Short Title: Amoeba: A methodology for modeling and evolving cross-organizational business processes ISSN: 1049-331X DOI: 10.1145/1571629.1571632 Legal Note: 1571632 Abstract: Business service engagements involve processes that extend across two or more autonomous organizations. Because of regulatory and competitive reasons, requirements for cross-organizational business processes often evolve in subtle ways. The changes may concern the business transactions supported by a process, the organizational structure of the parties participating in the process, or the contextual policies that apply to the process. Current business process modeling approaches


Reference Type: Journal Article
Record Number: 316
Author: Devdatta Kulkarni, Tanvir Ahmed and A. Tripathi
Year: 2012
Title: A Generative Programming Framework for Context-Aware CSCW Applications
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 21
Issue: 2
Pages: 1-35
Short Title: A Generative Programming Framework for Context-Aware CSCW Applications
ISSN: 1049-331X
DOI: 10.1145/2089116.2089121
Keywords: Access controls
Context-aware computing
Design
Abstract: We present a programming framework based on the paradigm of generative application development for building context-aware collaborative applications. In this approach, context-aware applications are implemented using a domain-specific design model, and their execution environment is generated and maintained by the middleware. The key features of this design model include support for context-based service discovery and binding, context-based access control, context-based multiuser coordination, and context-triggered automated task executions. The middleware uses the technique of policy-based specialization for generating application-specific middleware components from the generic middleware components. Through a case-study example, we demonstrate this approach and present the evaluations of the design model and the middleware.

Reference Type: Journal Article
Record Number: 7
Author: E. Duala-Ekoko and M. P. Robillard
Year: 2010
Title: Clone region descriptors: Representing and tracking duplication in source code
Journal: ACM Trans. Softw. Eng. Methodol.
Volume: 20
Issue: 1
Pages: 1-31
Short Title: Clone region descriptors: Representing and tracking duplication in source code
ISSN: 1049-331X
DOI: 10.1145/1767751.1767754
Legal Note: 1767754
Abstract: Source code duplication, commonly known as code cloning, is considered an obstacle to software maintenance because changes to a cloned region often require consistent changes to other regions of the source code. Research has provided evidence that the elimination of clones may not always be practical, feasible, or cost-effective. We present a clone management approach that describes clone regions in a robust way that is independent from the exact text of clone regions or their location in a file, and that provides support for tracking clones in evolving software. Our technique relies on the concept of abstract clone region descriptors (CRDs), which describe clone regions using a combination of their syntactic, structural, and lexical