
    THE METHODOLOGIES OF SYSTEM ANALYSIS AND DESIGN FOR

    COMPUTER INTEGRATED MANUFACTURING (CIM)

    by

    SHWU-YAN CHANG SCOGGINS, B.L., M.B.A.

    A DISSERTATION

    IN

    INDUSTRIAL ENGINEERING

    Submitted to the Graduate Faculty

    of Texas Tech University in

    Partial Fulfillment of

    the Requirements for

    the Degree of

    DOCTOR OF PHILOSOPHY

    Approved

    December, 1986


    ©1987

SHWU-YAN CHANG SCOGGINS

    All Rights Reserved


    ACKNOWLEDGEMENTS

I wish to express my deep appreciation for the invaluable assistance and direction which Dr. William M. Marcy and Dr. Kathleen A. Hennessey have given during all phases of the research. I also want to thank the other members of my committee, Dr. James R. Burns, Dr. Milton L. Smith, and Dr. Eric L. Blair, for their continuous interest, advice, and constructive criticism.

My gratitude also goes to Joe L. Selan and Cher-Kang Chua for long and tedious hours of proofreading and the drawing of figures. Lastly, I want to thank my husband, Mark, for his support, collection of references, and beneficial discussions of the problem.


    ABSTRACT

    This paper investigates the methodologies of system analysis and

    design for a CIM system from the software engineer's point of view. The

    hypotheses of this research are: 1) particular methodologies are likely

    to be suitable for a specific application system, 2) a combination of

    methodologies generally can make analysis and design more complete, and

    3) analysis of their characteristics can be used to select a methodology

    capable of providing system specifications for software development and

    system implementation.

    To confirm the hypotheses, nine design methodologies are chosen to

    analyze five application systems. Each methodology and application

    system has its own characteristics. If the hypotheses are true, it will

    be possible to match the characteristics of the methodologies with

    corresponding characteristics of a particular system. Also, once the

    methodologies are used, they should yield information that provides a

set of usable system specifications, and lead to a successful programming environment and implementation of the system.

The nine methodologies are SD (Structured Design), MSR (Meta Stepwise Refinement), WOD (Warnier-Orr Design), TDD (Top-Down Design), MJSD (Michael Jackson Structured Design), SADT (Structured Analysis and Design Technique), PSL/PSA (Problem Statement Language/Analyzer), HOS (Higher Order Software), and HIPO (Hierarchy-Input-Process-Output). The


five application systems are an overall CIM system, shop floor control subsystem, product design subsystem, production planning/scheduling subsystem, and inventory control subsystem. The characteristics of the methodologies include: system complexity, data structures, data flow, functional structures, process flow, decoupling, structure clash recognition, logical control, and data flow control. The characteristics of the application systems include: system complexity, functional structures, process flow, data structures, logical control, data flow control, cohesion, and coupling.

    The contributions of this research include a technique for applying

    Information Technology to manufacturing information problems, and a set

    of rules for combination of different methodologies to improve the

    results of analysis and design efforts.


    CONTENTS

    ACKNOWLEDGEMENTS ii

    ABSTRACT iii

    LIST OF FIGURES ix

    LIST OF TABLES xi

    CHAPTER

    1. INTRODUCTION 1

2. LITERATURE REVIEW: SYSTEM ANALYSIS AND DESIGN TECHNIQUES 9

    2.1. Stages Of System Development 10

    2.2. Basic Principles Of Software Engineering 11

    2.3. Basic Problems Of System Theory 12

2.4. System Analysis And Design Criteria 13

    2.4.1. Measures of Complexity 14

    2.4.1.1. Modularity 15

    2.4.1.2. Coupling 16

    2.4.1.3. Cohesion 16

    2.4.2. Representation of Complexity 17

2.4.2.1. Software Metrics 17

    2.4.2.2. Diagrams 20

    2.4.3. Approaches to Successful System Design 22

2.4.4. System Characteristics and Design Components 24

    2.5. Criteria for Analysis and Design Methodologies 25


    2.6. Design Methodologies 26

    2.6.1. Structured Design (SD) 27

2.6.2. Meta Stepwise Refinement (MSR) 28

    2.6.3. Warnier-Orr Design (WOD) 30

    2.6.4. Top-Down Design (TDD) 32

    2.6.5. Michael Jackson Structured Design (MJSD) 35

    2.6.6. Problem Statement Language/Analyzer (PSL/PSA) ... 39

2.6.7. Structured Analysis and Design Technique (SADT) 41

    2.6.8. Higher Order Software (HOS) 44

    2.6.9. Hierarchy-Input-Process-Output (HIPO) 48

    2.7. Summary 50

    3. ANALYSIS OF COMPUTERIZED MANUFACTURING SYSTEMS 54

    3.1. Basic Concepts of Flexible Manufacturing Systems (FMS) . 54

3.2. Flexible Manufacturing Systems vs. Transfer Line 55

    3.3. Cellular Manufacturing (CM) 56

3.4. Components of A Machining Cell 61

    3.5. Cell Operation And Cell Control 66

    3.6. Supporting Theories and Software 69

    3.6.1. CNC Standard Formats 69

    3.6.2. Group Technology (GT) and Group Scheduling 71

    3.6.3. Simulation 72

3.6.4. Artificial Intelligence (AI) and Expert Systems .. 73

    3.6.5. Computer-Aided Statistical Quality Control

    (CSQC) 75

    3.6.6. Database Management 76


    3.6.7. Decision Support System (DSS) 78

    3.6.8. Inventory Control 80

    3.7. Summary 81

4. CASE STUDY 84

    4.1. Structured Design (SD) 89

    4.2. Meta Stepwise Refinement (MSR) 91

    4.3. Warnier-Orr Design (WOD) 94

    4.4. Top-Down Design (TDD) 97

    4.5. Michael Jackson System Design (MJSD) 100

4.6. Problem Statement Language/Analyzer (PSL/PSA) 104

4.7. Structured Analysis and Design Technique (SADT) 109

    4.8. Higher Order Software (HOS) 109

    4.9. Hierarchical Input Process Output (HIPO) 118

    4.10. Conclusions 118

5. COMPARISON OF DESIGN METHODOLOGIES VS APPLICATION SYSTEMS ... 124

    5.1. The Application Systems 125

    5.2. The Relationship between Design Methodologies and

    Application Systems 131

    6. CONCLUSIONS 143

    6.1. The Need for Industrial Information System Standards ... 143

    6.2. Problems with Information and Human Resources 145

    6.3. Contributions of this Research 146

    6.4. Summary of Characteristics 146

    6.5. Recommendations for Further Research 148

    BIBLIOGRAPHY 150


    APPENDICES

    A. CNC MILL, TRIAC 167

    B. CNC LATHE, ORAC 172

    C. PSEUDOCODES FOR FMC IN A MULTITASKING ENVIRONMENT 182


    LIST OF FIGURES

    1. Shop Flow Control Context Diagram 86

    2. Data Flow Diagram for SD (By Yourdon, Demarco method) 90

    3. Warnier-Orr Data Structure Diagram 96

    4. Warnier-Orr Process Structure Diagram 98

    5. Top-Down Design (TDD) Diagram 99

    6. Michael Jackson Structured Design (MJSD) Data Step Diagram ... 101

7. Michael Jackson Structured Design (MJSD) Program Step Diagram 102

    8. PSL/PSA System Flowchart 105

9. SADT Level A0 IDEF0 Diagram 110

10. SADT Level A2 IDEF0 Diagram 111

11. SADT Level A3 IDEF0 Diagram 112

12. SADT Level A4 IDEF0 Diagram 113

13. SADT Level A31 IDEF0 Diagram 114

14. SADT Level A32 IDEF0 Diagram 115

15. SADT Level A33 IDEF0 Diagram 116

16. SADT Level A34 IDEF0 Diagram 117

17. Higher-Order Software (HOS) Diagram 119

    18. Flexible Manufacturing Cell HIPO Diagram 120

    19. CNC Mill HIPO Diagram 121

    20. CNC Lathe HIPO Diagram 122

21. Overall CIM System Context Diagram 126


22. Product Design Context Diagram 129

23. Production Planning Context Diagram 130

24. Inventory Control Context Diagram 132


    LIST OF TABLES

    1. Narrative, PSL Name, And PSL Object Type 106

2. Relationship Between Two Objects 107

    3. Characteristics of Design Methodologies 133

    4. Characteristics of CIM Application Systems 135

5. Design Methodologies vs. Application Systems 140


    CHAPTER 1

    INTRODUCTION

    In 1976 the first Flexible Manufacturing System (FMS) was installed

    in the United States, resulting in great achievements in manufacturing

    development. Achievements of FMS, including reduced work-in-process,

    quicker change-over items, reduced stock levels, faster throughput

    times,

     better response to customer demands, consistent product quality,

    and lower unit cost, have caused it to be termed "the second industrial

    revolution".

Computer Integrated Manufacturing (CIM) is a term introduced after FMS [177]*. Manufacturers recognize that integration has to be achieved

    not only in the factory, but also in almost every department and all

    functions of a manufacturing organization. Currently, few manufacturers

    fully implement CIM in the United States, but predictions indicate that

    expenditures on computerized factory automation systems in the United

States will climb from $5 billion in 1983 to as high as $40 billion in 1995 [57].

    However, the technology required to design and implement CIM is in

an early stage of development. One cannot turn traditional

* Note: In 1974, Dr. Joseph Harrington coined the concept "Computer Integrated Manufacturing" (CIM).


manufacturing equipment into modern Computer Numeric Control (CNC), Direct Numeric Control (DNC), or Adaptive Control (AC) machines. If computerized machines come from different vendors, or are different models from the same vendor, there are still problems in both the hardware communication and the software interfaces. The implementation of CIM cannot be done overnight; it requires an evolution instead of a revolution.

    Most manufacturers do not know what to do to implement a CIM

    system. The lack of generic system analysis and design techniques makes

it difficult to implement CIM systems. Many institutions and individuals have contributed to the research and development of CIM in the

    last decade. However, the current literature does not investigate the

    system analysis and design methodologies for CIM. Most of the papers

    either evaluate the software and hardware together for a particular

    system or focus on the functions of a CIM system; others concentrate

    on the improvement of machining; others comment on the methodologies of

modelling production, such as scheduling rules, simulation and mathematical programming. It is almost impossible to find information about

    software development for CIM, which may be attributable to CIM software

    being proprietary material of the companies involved in this area of

    research and development.

    The difficult part of CIM is that CIM is a matter of concept

    instead of technique. It is easy to train in technique but not in

    concept. The general concepts of CIM are as follows:

1. Integration is combining the elements of a system to form a

    whole.

2. The whole is greater than the sum of parts.


    3. The challenge of integration in automated systems is greater

    than the challenge of integration in conventional systems.

    An evaluation of methodologies of system analysis and design for a

    CIM system from the software engineer's point of view is needed. Unlike

    Manufacturing Automation Protocol (MAP) and Technical Office Protocol

    (TOP) which attempt to develop a set of standardized specifications that

will enable a factory's computers and computerized equipment to communicate with each other, this dissertation is concerned with the approaches

    of system analysis and the specification of software requirements.

    The success of automated manufacturing systems depends on low-cost,

    high-reliability software systems. Most work on areas of Information

    Technology (IT) is done for long-term payoffs, but not for short-term

benefits. The academic researchers' lack of large-scale software development experience is one of the reasons. Industrial support is growing

    rapidly for short term research, both within industrial labs and through

    support of university work.

    One of the fundamental problems of the IT areas is the management

    of complexity. Many of the real problems in software or hardware

    engineering show up only when one tackles large problems. As the

complexity of systems increases, the degree of difficulty of programming, testing, and debugging increases exponentially. Traditionally,

    system analysts and designers follow qualitative guidelines. The new

    technique uses mathematical computations, such as software metrics,

    number of variables involved, or number of variables and comparisons, or

    number of nodes and paths.


    The knowledge and theories involved in operating a CIM are very

    broad and very complex. However, system analysts need not know all the

technologies in detail, but they do need to know the organization structure, what functions are performed and in which department, how to

    synchronize all the functions, what the restrictions are, and what the

    input and output data will be. Integration of manufacturing includes not

only hardware (machines, material-handling tools, cell controllers, and computers), but also software (technologies, programs, data, and functions).

Hardware integration means that all programmable devices communicate with each other, rather than operating on a stand-alone basis.

    Integration of software allows data produced in one program module to be

    processed by any other program module inside the system, such as the

    data in a CAD/CAM (Computer-Aided Design/Manufacturing) database that

    can be used in simulation.

    The objectives of this research are as follows:

. To integrate manufacturing and production technologies with information system theories.

. To provide an organized approach to the application of the methodologies of system analysis and design in computer integrated manufacturing systems.

. To provide the decision rules in choosing methodologies.

. To provide an approach leading from design of a complex system to multitasking programming.

    This research has three hypotheses. The first hypothesis states

that each application system, as well as each methodology, has its own


characteristics. There are certain methodologies suitable for particular types of application systems. Nine analysis and design methodologies are

    chosen to analyze five application systems. Should a match occur between

the criteria of the methodologies and the characteristics of a particular system, the first hypothesis is accepted.

    The second hypothesis states that the combination of different

    methodologies always provides a better analysis and design technique. A

    good methodology should provide as much information as possible. But

    there is no methodology that includes all types of information, so the

    combination of different methodologies, according to the selection

    rules,  provides more information, and therefore, is a better analysis

    and design technique.

    The third hypothesis states that once the methodologies are used,

    they should yield information that provides a full set of usable system

    specifications, and lead to a successful programming environment and

    implementation of the system. Prior to this research, articles which

    evaluate the contribution of the methodologies to CIM programming

    techniques could not be found in the literature.

    In Chapter 2, the most commonly used methodologies are reclassified

    into nine system analysis and design methodologies. Distinctions

    between these methodologies are not always consistent. G. D. Bergland

combined Meta Stepwise Refinement (MSR), Michael Jackson's Structured Design (MJSD), and Warnier's Logical Construction of Programs/Systems (LCP/LCS) into the same category as Structured Design (SD) [25] (in this paper, Warnier's methodology is combined with Orr's as the Warnier-Orr Design methodology). However, based on the design procedures,


diagramming technique, data flow control, logic control, system structure, functional decomposition, and system specifications, these methodologies are distinguished.

The following nine methodologies are studied: Structured Design (SD), Meta Stepwise Refinement (MSR), Warnier-Orr Design (WOD), Top-Down Design (TDD), Michael Jackson's System Design (MJSD), Structured Analysis and Design Technique (SADT), Problem Statement Language/Analyzer (PSL/PSA), Higher Order Software (HOS), and Hierarchy-Input-Process-Output (HIPO). The United States Air Force's Integrated Computer Aided Manufacturing Definition (IDEF), derived from SADT, gave SADT more refined descriptions and is very suitable for manufacturing system analysis.

Each methodology has its weaknesses and strengths, and no one methodology would be appropriate for every design problem. NBS (National

    Bureau of Standards) has established IGES (Initial Graphic Exchange

    Specification) and now Automated Manufacturing Research Facility (AMRF)

    as standards for industry. It is a natural trend to develop standards

    for industrial needs, and we should not be surprised to see that

    standards for system analysis and design would produce a methodology

    synchronizing all the standards into a complete software development

    technique, especially for CIM, in the future.

Chapter 3 is an overview of general CIM systems, including components of a manufacturing cell, process control, theories in production planning and design, and Decision Support Systems (DSS). Based on

    this information, further system analysis can be performed.


    Because CIM is a complex system, several design methodologies, such

    as SD, MSR, WOD, and MJSD, do not fit the overall CIM system analysis,

    but they might be good for a smaller scope of system analysis. So, this

    thesis evaluated these methodologies based on different application

systems. Five application systems are used: 1) overall CIM system, 2)

    shop floor control subsystem, 3) product design subsystem, 4) production

    planning (scheduling) subsystem, and 5) inventory control subsystem. In

    Chapter 4, nine methodologies are applied only to one of the subsystems,

    a shop floor control subsystem. The shop floor control system in this

    chapter is actually a Flexible Manufacturing Cell (FMC) which is

installed in the Industrial Engineering Department, Texas Tech University. It has three functional modules: CNC lathe, CNC milling machine,

and robot. Each module has its own software and functions independently. In order to computerize the flexible manufacturing cell and optimize

    productivity, a computer was used to coordinate these three software

    systems.

    Application systems were first broken into modules, each module

    representing a complete and independent function. Only complete and

    independent modules of software and hardware that already exist and

    function on a stand-alone basis were used, so that the focus was on the

    interface of the modules.

    Chapter 5 not only summarizes the conclusions from Chapter 4, but

    also expands the methodologies to the other four system/subsystems. The

    characteristics of the application system are compared and contrasted to

    the representation criteria of the nine methodologies. Methods are


    provided to select more than one methodology for a particular type of

    application system.

The shop floor system has real-time multitasking processes (for example, the CNC lathe, CNC mill, and robot can be operating simultaneously), so it must be implemented under a real-time multitasking executive.

    In this research, a commercial product (AMX86) was chosen. Chapter 5

    also demonstrates the program design from the information given in the

system analysis and design methodologies and their diagrams. Pseudocode for higher-level functions was written, leaving detailed programming to be provided as required for each unit.

    Chapter 6 contains conclusions and recommendations for further

    research. It restates the issues in applying Information Technology to

    implement a CIM system and offers solutions to the problems. This

research not only provides an analytical approach for applying Information Technology to manufacturing areas, but also suggests rules to combine different methodologies. It offers insight into the relationship

    between the methodologies and the programming technique, and provides a

    framework for implementing a FMC in a multitasking environment.

    Further work beyond the scope of this research can be done to

    determine the limitation of complexity management for each methodology

    quantitatively and set up the system analysis and design standards and

    rules,  which are applicable to any type of system. More mathematical

    decision-making and automated design tools are needed to improve the

    accuracy, consistency, completeness, modifiability, efficiency, and

    applicability of system analysis and design techniques.


    CHAPTER 2

    LITERATURE REVIEW: SYSTEM ANALYSIS AND

    DESIGN TECHNIQUES

    System analysis consists of collecting, organizing, and evaluating

    facts about a system and the environment in which it operates. The

    objective of system analysis is to examine all aspects of the system and

    to establish a basis for designing and implementing a better system

    [63].

    System design essentially recognizes processes and defines the data

    content of their interfaces. System design is distinct from program

    design in that program design deduces the flow-of-control structure

    implicit in those interfaces. The ideal capability of a methodology is

    that system design tools should establish system processes in such a way

that subsequent program design cannot invalidate these processes [94].

Program design techniques are associated with resolving inconsistencies and incompatibilities between required outputs and the inputs

    from which they must be derived. System design, on the other hand,

    seeks to establish a structured statement of what is to be accomplished

    in terms of products, functions, information, resources and timing.

    Nevertheless, some aspects of program design methodology can be seen to

have counterparts in system design methodology, especially in terms of measurement and representation of complexity and design components.


    System design is of critical importance, especially for large-scale

    projects.

      System techniques provide formal disciplines for increasing

    the probability of implementing systems characterized by high degrees of

initial correctness, readability, and maintainability, and promote practices that aid in the consistent and orderly development of a total

    software system on schedule and within budgetary constraints. These

    disciplines and practices are set forth as a set of rules to be applied

    during system development to eliminate the time spent in debugging the

    code,

     to increase understanding among those who come in contact with it,

    and to facilitate operation and alteration of the program as the

requirements or program environment evolves.

2.1. Stages of System Development

    Seven phases in the system life cycle are distinguished:

    Phase 1: Documentation of the existing system.

    Phase 2: The logical design.

    Phase 3: The physical design.

    Phase 4: Programming and procedure development.

    Phase 5: System test and implementation.

    Phase 6: Operation.

Phase 7: Maintenance and modification.

J. D. Couger stated [63] that the amount of cost and its allocation among the phases of system development in the 1970s are different from those in the 1980s. The total cost of system development has increased. The primary increase in cost occurred in the early part of the

    system development cycle. In the 1970s, only 35 percent of development


cost occurred in phases 1 through 3; now it is approximately 55 percent.

      However, better analysis and planning of systems results in

    lower costs in the subsequent phases. Unfortunately, improvement in

    system development techniques has not kept pace with the improvement in

    computing equipment and the increase in the complexity of systems.

System analysts continued to use techniques developed during the era of first-generation computers (1940-1950). In the late 1970s and early 1980s, the gap began to close. Techniques especially suited for analysis and design of complex systems were developed. The fifth generation

    of system techniques has been developed almost in parallel with the

fifth generation of hardware (in the late 1980s).

2.2. Basic Principles of Software Engineering

    Software engineering is the study of software principles and their

    application to the development and maintenance of software systems.

    Software engineering includes the structured methodology and the

    collection of structured techniques as well as many other software

    methodologies and tools. The basic principles of software engineering

    are defined as follows:

1. Principle of Abstraction: separate the concept from the reality.

2. Principle of Formality: follow a rigorous, methodical approach to solve a problem.

3. Divide-and-Conquer Concept: divide a problem into a set of smaller, independent problems that are easier to solve.

4. Hierarchical Ordering Concept: organize the components of a

    solution into a tree-like hierarchical structure. Then the


    solution can be constructed level by level, each new level

    adding more detail.

    5. Principle of Hiding: hide nonessential information. Enable a

    module to see only the information needed for that module.

    6. Principle of Localization: group logically related items close

    together.

    7. Principle of Conceptual Integrity: follow a consistent design

    philosophy and architecture. It is the most important principle

of software engineering.

8. Principle of Completeness: ensure the system completely meets all requirements, involving data, function, correctness, and robustness (i.e., the system's ability to recover from errors) [224].

2.3. Basic Problems of System Theory

    Langefors stated [127] that the conditions or functions at the

    outer boundary (outer boundary is the boundary of that design; inner

    boundary is the set of subsystems) must be estimated by the designer of

    the system. The conditions or functions at the intermediate boundary

    (the boundary to the other subsystems) must not be estimated by the

    subsystem designer. Instead, delineation of the intermediate boundary

    should be derived in a formal fashion, by using system properties, from

decisions made at the outer boundary. Langefors aims to provide formal methods of deriving intermediate boundary conditions from outer ones.

    However, there are imperceivable systems which cause people to

    neglect the importance or the existence of things that they are not able

    to see or perceive. The imperceivable system is defined by Langefors


    as a system in which the number of its parts and their interrelations is

    so high that its overall structure cannot be safely perceived or

observed at one and the same time [127]. Because imperceivability is

    subjective and varies from person to person, there is no means for

    proving it by logic. Such a proof would have to be based on other

    propositions. In the field of system analysis and design, designers do

    not know such propositions which could be more conveniently used as

primaries in proving the correctness of the applications of the methodologies. In this kind of research, results usually are based on deductive

    statements (a method of formal analysis) instead of formal proofs.

    An efficient way of designing an imperceivable system is by adding

    a perceivable set of subsystems or interactions to a subsystem structure

    of a system and by testing each subsystem's structure for feasibility,

before any subsystem contained in it is designed, by giving its subsystem structure. The methodologies of system analysis and design partition a system into subsystems, specify subsystem properties and interactions, verify that all subsystems can be realized, construct system properties from subsystem structure, and compare them with specified system properties.

2.4. System Analysis And Design Criteria

The aim of system design is implementation of a functioning information-handling facility. To do this, the design must be capable of translation into a set of coordinated procedures; in the case of information systems, this most frequently involves software and data design. When the application involves large volumes, and/or variety in


    data structures, relationship and functions, problems of complexity

    beyond the ability of the software engineer to solve them can result.

    Several methods have been developed for defining and dealing with

    complexity in software design that can be adapted to the overall context

    of system design. Measures of software complexity include: modularity,

    coupling and cohesion; software metrics and diagrams are used to define

and represent complex program structures.

2.4.1. Measures of Complexity

Several measures of complexity were developed based on module size in the 1970s; a small illustrative sketch follows the list below.

1. Halstead's Software Science is the most popular measure of complexity. The software science metrics derive four basic counts for a program: n1 is the number of distinct operators in a program, n2 is the number of distinct operands in a program, N1 is the total number of operators in a program, and N2 is the total number of operands in a program. The length of a program (module) is N = N1 + N2.

2. McCabe's Cyclomatic Number determines the complexity by counting the number of linearly independent paths through a program. In a structured program, the cyclomatic number is the number of comparisons in a module plus one (equivalently, V(G) = edges - nodes + 2 for the control-flow graph).

3. McClure's Control Variable Complexity computes the complexity

    as the sum of the number of comparisons in the module and the number of

    unique variables referenced in the comparisons.
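As a minimal, hedged sketch (not part of the original text), the three module-size measures above could be computed in Python as follows. The toy module, token lists, and function names are illustrative assumptions only.

    def halstead_length(operators, operands):
        # Halstead length N = N1 + N2: total operators plus total operands.
        return len(operators) + len(operands)

    def cyclomatic_number(num_comparisons):
        # McCabe's V(G) for a structured module: comparisons (decisions) plus one.
        return num_comparisons + 1

    def mcclure_complexity(num_comparisons, control_variables):
        # McClure's measure: comparisons plus unique variables referenced in them.
        return num_comparisons + len(set(control_variables))

    # Hypothetical module: IF (A > B) THEN C := A + B ELSE C := A - B
    operators = [">", ":=", "+", ":=", "-", "IF-THEN-ELSE"]
    operands = ["A", "B", "C", "A", "B", "C", "A", "B"]
    print(halstead_length(operators, operands))    # N = 6 + 8 = 14
    print(cyclomatic_number(1))                    # 1 comparison + 1 = 2
    print(mcclure_complexity(1, ["A", "B"]))       # 1 comparison + 2 variables = 3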

    There are several factors that influence the control of complexity,

    such as modularity, coupling, and cohesion.


    2.4.1.1. Modularity

    Dividing a program into modules can be a very effective way of

    controlling complexity. How a program is divided into modules is called

its modularization scheme. The modularization scheme includes restricting module size to a certain number of instructions, such as IBM's 50 lines of program code, Weinberg's 30 programming instructions, or J.

Martin's 100 instructions. However, in addition to module size guidelines, the logical and control constructs and variables also affect

    modularity.

    There are two basic types of modularity: problem-oriented and

    solution-oriented. A basic premise of Modular Programming (MP) is that

    a large piece of work can be divided into separate compilation units so

    as to limit the flow-of-complexity, permit parallel development by

    several programmers and cut recompilation costs. This premise generates

    wholly solution-oriented modules and relies on wholly solution-oriented

    criteria of modular division. At the other extreme is a program which

    responds to stimuli in a process-control application, and is wholly

    problem-oriented because it is structured to reflect the time-ordering

    of its input.

MP is used as a crude device for containing the intolerable complexity of the unconstrained control logic of large programs. Two rules of thumb were

    found to be useful and were more fully exploited in later methodologies.

    One is the idea of functional separateness for constituent modules.

    This concept is refined and extended by both Myers and Constantine

    [64].  The second was the notion of a central control module directing


    the processing carried out by all subordinate modules. MP is thought of

    as either the predetermined design or architectural design.

    2.4.1.2. Coupling

    Modules are connected by the control structure and by data. To

    control complexity the connections between modules must be controlled

    and minimized. Coupling measures the degree of independence between

    modules.

      The less dependence between modules, the less extensive the

    chain reaction that occurs because of a change in a module's logic. The

    more interaction between two modules, the tighter the coupling and the

    greater the complexity. It is affected by three factors: 1) the number

    of data items passed between modules, 2) the amount of control data

    passed between modules, 3) the number of global data elements shared by

    modules.

    In a system there are five types of coupling between two modules:

    data coupling, stamp coupling, control coupling, common coupling and

    content coupling. Data coupling is the loosest and the best type of

    coupling; content coupling is the tightest and the worst type of

    coupling.

Decoupling is a method of making modules more independent. Decoupling is best performed during system design. Each type of coupling

    suggests ways to decouple modules.

    2.4.1.3. Cohesion

    Cohesion measures how strongly the elements within a module are

related. There are seven levels of cohesion; from the strongest to the weakest, they are functional, sequential, communicational, procedural, temporal, logical, and coincidental cohesion.


    quality which are available early in the development process and produce

    quantitative evaluation of many critical structural attributes of large-

    scale systems. The most important practical test of a software metric

    is its validation on real software systems.

    The structure of information flow presents all of the possible

paths of information flow in the subset of the procedures. The information flow paths are easily computable from the relations which have been generated individually for each procedure. The current techniques of data flow analysis are sufficient to produce these relations automatically at compile time. If the external specifications have been completed, then the information flow analysis may be performed at the design

    stage.

    The complexity of a given problem solution is not necessarily the

    same as the unmeasurable complexity of the problem being solved. The

    complexity of a procedure depends on two factors: the complexity of the

    procedure code and the complexity of the procedure's connections to its

environment. A very simple length measure of a procedure was defined

    as the number of lines of text in the source code for the procedure.

The (fan-in * fan-out) computes the total possible number of combinations of an input source to an output destination. Here, fan-in of

    procedure A is the number of local flows into procedure A plus the

    number of data structures from which procedure A retrieves information.

    Fan-out of procedure A is the number of local flows from procedure A

    plus the number of data structures which procedure A updates.

    The global flows and the module complexities show four areas of

    potential design or implementation difficulties for the module.


    1. Global flows indicate a poorly refined data structure. Redesign

of the data structure to segment it into several pieces may be a solution to this overloading.

2. The module complexities indicate improper modularization. It is desirable that a procedure be in one and only one module. This is particularly important when implementation languages are used which do not contain a module construct and violations of the module property are not enforceable at compile time.

    3. High global flows and a low or average module complexity

    indicate a third area of difficulty, namely, poor internal module

construction.

    4. A low global flow and high module complexity may reveal either a

    poor functional decomposition within the module or a complicated

    interface with other modules.

    The formula defining the complexity value of a procedure is

    length * (fan-in * fan-out) ** 2

    The formula calculating the number of global flows is

    (write * read) + (write * read_write) +

    (read_write * read) + (read_write * (read_write - 1))
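As a minimal, hedged sketch (not part of the original text), the two formulas above could be computed in Python as follows; the counts in the example are hypothetical.

    def procedure_complexity(length, fan_in, fan_out):
        # Information-flow complexity of a procedure: length * (fan-in * fan-out) ** 2.
        return length * (fan_in * fan_out) ** 2

    def global_flows(write, read, read_write):
        # Global flows through a shared data structure, given the number of
        # procedures that only write it, only read it, or both read and write it.
        return (write * read) + (write * read_write) + \
               (read_write * read) + (read_write * (read_write - 1))

    # Hypothetical procedure: 40 source lines, fan-in of 3, fan-out of 2.
    print(procedure_complexity(40, 3, 2))    # 40 * (3 * 2) ** 2 = 1440
    # Hypothetical shared structure: 2 writers, 3 readers, 1 reader-writer.
    print(global_flows(2, 3, 1))             # 6 + 2 + 3 + 0 = 11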

    The interface between modules is important because it allows the

    system components to be distinguished and also serves to connect the

components of the system together. A design goal is to minimize the

    connections among the modules. Content coupling refers to a direct

    reference between the modules. This type of coupling is equivalent to

    the direct local flows. Common coupling refers to the sharing of a


global data structure, which is equivalent to the global flows

    measure.

The connections between two modules are a function of the number of procedures involved in exporting and importing information between the modules, and the number of paths used to transmit this information. A simple way to measure the strength of the connections from module A to

    module B is

    (the number of procedures exporting information from module A

    + the number of procedures importing information into module B)

    * the number of information paths.
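A minimal, hedged sketch (not part of the original text) of this coupling-strength measure in Python; the module roles and counts are assumed for illustration.

    def coupling_strength(procs_exporting_from_a, procs_importing_into_b, info_paths):
        # Strength of the connections from module A to module B:
        # (procedures exporting from A + procedures importing into B) * information paths.
        return (procs_exporting_from_a + procs_importing_into_b) * info_paths

    # Hypothetical case: 2 procedures in module A export data, 3 procedures in
    # module B import it, over 4 distinct information paths.
    print(coupling_strength(2, 3, 4))    # (2 + 3) * 4 = 20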

    The coupling measurements show the strength of the connections

between two modules. Coupling also indicates a measure of modifiability. If modifications are made to a particular module, the coupling

    indicates which other modules are affected and how strongly the other

    modules are connected. These measurements are useful during the design

    phase of a system to indicate which modules communicate with which other

    modules and the strength of that communication. During implementation

    or maintenance, the coupling measurement is a tool to indicate what

    effect modifying a module will have on the other components of a system.

    2.4.2.2. Diagrams

There are many types of diagrams: structured diagrams, data flow diagrams, structure charts, Warnier-Orr diagrams, Michael Jackson diagrams, HIPO diagrams, flowcharts, pseudocode, HOS charts, Nassi-Shneiderman charts, action diagrams, decision trees and decision tables, data analysis diagrams, entity-relationship diagrams, data navigation


diagrams, and compound data accesses. Three species of functional decomposition charts, IDEF0, IDEF1, and IDEF2, are usually associated with SADT.

    Each system design and analysis methodology has some diagrams to

    represent the functional decomposition or data structure or data flow.

    Good, clear diagrams play an essential part in designing complex systems

and developing programs. They also help in maintenance and debugging.

The larger the program, the greater the need for precision in diagramming. MSR does not use any kind of diagram; it simply uses programming

    techniques at each level and refines one portion at a time until the

whole program is completed. PSL/PSA also does not involve any diagramming technique. It only uses literal description to express functional

    and performance requirements.

    Structured diagramming combines graphic and narrative notations to

    increase understandability. It can describe a system program at varying

    degrees of detail during each step of the functional decomposition

    process.  Flowcharts are not used because they do not give a structured

    view of a program. Today's structured techniques are an improvement

    over earlier techniques. However, most specifications for complex

systems are full of ambiguities, inconsistencies, and omissions. More precise, mathematically based techniques are evolving so that we can use the computer to help create specifications without these problems, such as CAI, CAD, and CAM (Computer-Aided Instruction, Design, and Manufacturing, respectively), and CASA and CAP (Computer-Aided Systems Analysis and Programming).


2.4.3. Approaches to Successful System Design

    There are many design techniques which apply to both integrity and

flexibility. A system has a high level of integrity if it has the following characteristics: correctness, availability, survivability, auditability,

    and security.

    Flexibility includes the ability to modify functions, protocols, or

    interfaces provided by the system or to add new functions, protocols, or

    interfaces,

      and the ability to expand the capacity of the system, or

    possibly to decrease the capacity.

    Ground rules which apply both to designing for integrity and

    designing for flexibility include the following:

    1. Design for the maximum feasible modularity: The piece-at-a-time

    approach rather than the grand-design approach should be used throughout

    system design and implementation. Modularity improves integrity as well

    as flexibility. A modular system is more readily understood and is less

    likely than a more integrated system to have hidden relationships among

    its elements which eventually cause errors or failures. Modularity is

    therefore one of the most basic ground rules of good design.

2. Design for simplicity and avoid complexity: Complexity in design or implementation makes a program or system difficult to understand; systems which cannot be understood cannot easily be changed. Simplicity,

    like modularity, contributes to both integrity and flexibility.

    3. Create a set of design and implementation standards within the

    organization, and observe them rigorously. Those standards include

    programming-language standards, protocols, interfaces, and internal

    interfaces.


    4. Document everything.

    Designing rules for integrity are as follows:

    1. Provide automatic restart and recovery capabilities for all the

    system's information processors — host and satellite.

    2.

     Ensure that the communications links, processors, and other

    equipment have adequate diagnostic capabilities,

    3, Arrange for manual operation in case of serious system failures,

    4, Define which data elements or functions must be protected for

    security or privacy reasons and how this protection relates to

    the access control over system functions,

    5, Design the system to be self-tracking, by logging all important

    events,

    6. Ensure that the DBMS (DataBase Management System) software to be

    used includes dead-lock detection and resolution,

    7. Minimize the amount of keying required in data entry,

    8, Do not allow access to a database except by standard  DBMS.

    Designing rules for flexibility are as follows:

1. Decouple information processing, database, and network design

    and implementation. Keeping each of these areas as independent

    as possible allows each to be changed without affecting any of

    the others. This is modularity on a systemwide basis and has

    the same effect as modularity at a lower level, such as within

    an application program.

2. Decouple the design and management of the user interfaces, especially terminal-user interfaces, from processing which uses the

    input data.


    3. Standardize and document all interfaces between modules and

    programs.

    4. Limit the size of programs and modules.

    5. Minimize the degree of tight integration among the parts of the

    information system; emphasize loose coupling instead.

    6. Don't define limits within modules, programs, or the system.

2.4.4. System Characteristics and Design Components

    There are some characteristics which can be used to represent

    either a methodology or an application system.

    1. Data structures and clash recognition: Data structures are the

vertical data relationships. Usually, input data structures are separate from output data structures. Structure clash occurs when input

    data structures are inconsistent with output data structures. If there

    is a large amount of data involved, it is easy to have structure clash.

    If the methodologies are data-driven, then data structure charts will be

included.

2. Data flow analysis and control: Flow of data shows the horizontal data relationship, or indicates the input-process-output relationship. This relationship can be one to one (1:1), one to many (1:N), many to one (N:1), or many to many (N:M). Problem Statement Language/Analyzer (PSL/PSA) generates data metrics, from the input data, which shows a many to many relationship. The input-process-output relationship is many to many for the Structured Analysis and Design Technique/


IDEF (SADT/IDEF), Higher Order Software (HOS), and Hierarchy-Input-Process-Output (HIPO), and one to one for Structured Design (SD).

3. Functional structures indicate the vertical functional relationship. They can be hierarchy charts or structure charts. Usually structure charts include more information than hierarchy charts.

    4. Process flow analysis is the horizontal functional relationship.

    It shows the sequence of processes. Process flow charts usually include

    input and output for each function (data flow analysis).  However, data

    flow analysis does not always include process flow analysis. Data flow

    analysis can be included in functional structure charts, such as HOS.

5. Control mechanism: Some methodologies are good in logical control but weak in data flow control, such as Warnier-Orr Design (WOD) and Top-Down Design (TDD); some are the reverse, while others are good in both controls.

2.5. Criteria for Analysis and Design Methodologies

    Generally, a good design methodology and its diagrams should:

    1. include some or all of the following information:

    . process flow (horizontal functional relationship)

    . functional structures (vertical functional relationship)

    . data flow analysis (interface of data and functions)

    . data structures (vertical data relationship)

2. provide formal guidelines to determine how to decompose structures, what the boundaries are, and when to stop;

    3. check the consistency of data input and output;


    4. recognize structure clash;

    5. be precise in describing system structures;

    6. be able to handle complex systems;

7. be easy to use;

    8. be easy to understand;

    9. be easy to modify;

    10. be easy to program and implement;

11. provide documentation.

The more information a diagram can show, the more easily the programmer can write the code. A good diagram should be not only easy to read and understand, but also consistent. Only when the flow of data is shown can data consistency be checked.

    The flow of data is as important as the sequence of processes. If

    a diagram does not show the input and output data of a function process,

    there is a high probability of data inconsistency.

    2.6. Design Methodologies

    Design methodology may be defined as a set of principles which

    enables the building of an exact model of events and their associated

data and the information to be derived from them. The design methodology needs to construct a structure in terms of data entities, their

    dependences, relative orderings and associated data attributes in which

    internal consistency and completion are subject to verification by

    inspection, and implementation and optimization may be successively

    applied to it to transform it to executable form.


    the interfaces between modules, and defining program quality metrics

[141].

    SD is well suited to design problems where a well-defined data flow

can be derived from the problem specifications. Some of the characteristics that make a data flow well-defined are that input and output are

    clearly distinguished from each other, and that transformations of data

    are done in incremental steps — that is, single transformations do not

    produce major changes in the character of the data.

The major problem with SD is that the design is developed bottom-up, which means that it can be applied only to relatively small problems. Also, identifying input and output flow boundaries plays an important

    role in the definition of the modules and their relationships. However,

    the boundaries of the modules can be moved almost arbitrarily, leading

to different system structures, but no formal guide is provided [167].

2.6.2. Meta Stepwise Refinement (MSR)

Meta Stepwise Refinement (MSR) was authored by Henry Ledgard and later given its name by Ben Shneiderman. It is a synthesis of Mills's top-down notions, Wirth's stepwise refinement, and Dijkstra's level

    structuring. It produces a level-structured, tree-structured program.

    MSR allows the designer to assume a simple solution to a problem

    and gradually build in more and more detail until the complete, detailed

    solution is derived. Several refinements, all at the same level of

    detail,

      are conjured up by the designer each time additional detail is

desired. The best of these is selected, and so on. Only the best solution is refined at each level of detail. The topmost level represents


    the program in its most abstract form. In the bottommost level, program

    components can be easily described in terms of the programming language.

    MSR requires an exact, fixed problem definition. In early stages, it is

    programming language independent. The details are postponed to lower

    levels.

     Correctness is ensured at each level.

    Each level is a machine. The three components of a machine are: a

    data structure set, an instruction set, and an algorithm. To begin, the

program is conceived as a dedicated virtual machine. During the refinement process, a real machine is constructed from the virtual machine.

    At each step a new machine is built. As the process continues, the

    components of each successive machine become less abstract and more like

    instructions and data structures from the programming language. The

    process is complete when a machine can be built entirely of components

available in the programming language.

    Since the solution at any one level depends on prior (higher)

    levels,

      and since any change in the problem statement affects prior

levels, the user's ability to produce a solution at any level is undermined until the changes are made. One approach is to refuse changes

    until the design is complete. This results in the solution and the

    requirements being unsynchronized. The production of multiple solutions

    is another difficulty. Coming up with fundamentally different solutions

    to a problem is not a likely occurrence for an individual. Also, how to

    decide which solution is best is not addressed by this method.

    This approach works best on small problems, perhaps those involving

    only a single module. It is particularly useful where the problem


    specifications are fixed and an elegant solution is required, as in

    developing an executive for an operating system [141, 167].

    2.6.3. Warnier-Orr Design (WOD)

Jean-Dominique Warnier and his group at Honeywell-Bull in Paris
developed a methodology, Logical Construction of Programs/Systems
(LCP/LCS), in the late 1950s, which is similar to Jackson's design
methodology in that it also assumes data structure is the key to
successful software design. However, this method is more proceduralized
in its approach to program design than the Jackson method. In the mid
1970s, Ken Orr modified Warnier's LCP and created Structured Program
Design (SPD). The hybrid form of LCP and SPD is the Warnier-Orr Design
(WOD) methodology [141].

WOD is a refinement of the basic Top-Down Design (TDD) approach. It
has the basic input-process-output model for a system. It treats output
as more important than input, which differs from the Yourdon-Constantine
approach and the Jackson approach. The designer begins by defining the
system output and works backwards through the basic system model to
define the process and input parts of the design. It is not really a
top-down design. The six steps in the Warnier-Orr design procedure are:
define the process outputs, define the logical data base, perform event
analysis, develop the physical data base, design the logical process,
and design the physical process [141].
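
    A minimal sketch of this output-first idea, assuming a hypothetical
    departmental-hours report (Python is used here only for illustration,
    and steps 3 and 4, event analysis and the physical data base, are
    omitted): the output line is defined first, the logical data items it
    requires are identified next, and only then is the process designed.

        # Step 1: define the process output -- one summary line per department.
        def summary_line(department, total_hours):
            return "%-12s %6.1f" % (department, total_hours)

        # Step 2: define the logical data base -- the items the output requires,
        # here simply (department, hours) pairs taken from time cards.
        timecards = [("assembly", 7.5), ("paint", 8.0), ("assembly", 6.0)]

        # Steps 5 and 6: design the process, working backwards from the output.
        def produce_summary(cards):
            totals = {}
            for department, hours in cards:
                totals[department] = totals.get(department, 0.0) + hours
            return [summary_line(d, h) for d, h in sorted(totals.items())]

        if __name__ == "__main__":
            print("\n".join(produce_summary(timecards)))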

WOD is data-driven, deriving the program structure from the data
structure, just like the Jackson Methodology. Both stress that logical
design should be separated from, and should precede, physical design.


    Structured programming provides only part of the theory which has

    resulted in the development of structured systems design. Another major

    element is data base management. The data base management system can

    provide a great deal of data independence and thereby reduce the cost

    of system maintenance.

This method appears to be suited to small, output-oriented problems
where the data are tree-structured; the latter requirement leads to the
same kinds of problems as the Jackson Methodology [167]. It has no GO
TO structure. Network-like data structures are not permitted. It cannot
be used as a general-purpose design methodology, since many data bases
are not hierarchically organized.

Another problem is that the Warnier-Orr diagram includes control
logic for loop termination and condition tests only as footnotes, rather
than as an integral part of the diagram. This makes the design of the
program control structure a critical and difficult part of program
design. There are no guidelines for control logic design. The
methodology builds the logical data base, which lists all the input data
items required to produce the desired program output, much earlier, in
step 2. It provides no comparable structure to list control variables,
nor are these new variables added to the logical data base. Further, it
does not include a step to check that each control variable has been
correctly initialized and reset [141].

    The Warnier-Orr Diagram is a form of bracketed pseudocode where

    nesting is horizontal. For larger problems with several levels of

    nesting, the diagram quickly becomes many pages wide. Also, for larger


    problems involving many pseudocode instructions, the diagram quickly

    becomes very crowded.

    At the point of defining the logical data base from the logical

    output structure and the logical input structure, the methodology

    becomes vague. By following from the output, the designer may overlook

    requirements. When oversights occur, the designer must redraw the data

    structure to include the hidden hierarchy. For very complex problems,

    many requirements may not be initially apparent from an examination of

    the system output. This may lead to an incorrect design.

Another problem with multiple output structures is the incompatible
hierarchy, which can be dropped from the structure: the process code is
transferred to the next lower hierarchical level, and control logic is
added at that level to determine when to execute the transferred logic.

    The ideal input structures which are derived from the problem

    output structure may not be compatibly ordered with the already existing

    physical input files. The user can always write another program to

    convert the existing input into the ideal input. Sometimes, the data

may have to be completely restructured [141].

2.6.4. Top-Down Design (TDD)

    Top-Down Design (TDD) is a design strategy that breaks large,

    complex problems into smaller, less complex problems, and then decompo

    ses each of those smaller problems into even smaller problems, until the

    original problem has been expressed as some combination of many small,

    solvable problems.


    The interface between modules and the interface between subsystems

    are common places for bugs to occur. In the bottom-up approach, major

    interfaces usually are not tested until the very end — at which point,

    the discovery of an interface bug can be disastrous. By contrast, the

    top-down approach tends to force important, top-level interfaces to be

exercised at an early stage in the project, so that if there are
problems, they can be resolved while there still is the time, the
energy, and the resources to deal with them. Indeed, as one goes

    further and further in the project, the bugs become simpler and simpler,

    and the interface problems become more and more localized.

In the radical top-down approach, one first designs the top level
of a system, then writes the code for those modules, and tests them as a
version 1 system. Next, one designs the second-level modules, those
modules a level below the top-level modules just completed. Having
designed the second-level modules, one writes the code and tests a
version 2 system, and so forth.
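
    A hedged sketch of what such a version 1 system might look like, using
    an invented payroll example in Python: only the top-level module is
    real, the second-level modules are stubs, and the top-level interfaces
    can therefore be exercised immediately.

        def produce_payroll(employee_ids):            # top-level module, coded first
            checks = []
            for emp in employee_ids:
                hours = get_hours(emp)                # second-level stub
                pay = compute_pay(emp, hours)         # second-level stub
                checks.append(write_check(emp, pay))  # second-level stub
            return checks

        def get_hours(emp):                           # stub: fixed dummy value
            return 40

        def compute_pay(emp, hours):                  # stub: trivial placeholder rule
            return hours * 10.0

        def write_check(emp, pay):                    # stub: just formats a string
            return "pay %s: %.2f" % (emp, pay)

        if __name__ == "__main__":
            print(produce_payroll(["e1", "e2"]))      # version 1 runs end to end

    Each later version replaces a layer of stubs with real code, so
    interface bugs surface while they are still cheap to fix.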

    The conservative approach to top-down implementation consists of

    designing all the top-level modules, then the next level, then all

    third-level modules, and so on until the entire design is finished.

Then the developer codes the top-level modules and implements them as a

    version 1. From the experience gained in implementing version 1, the

    developer makes any necessary changes to the lower levels of design,

    then codes and tests at the second level, and on down to the lowest

    level.

    There are an infinite number of compromise top-down strategies that

    one can select, depending on the situation. If the user has no idea of


what he wants, or has a tendency to change his mind, one should opt for
the radical approach. Committing oneself to code too early in the
project may make it difficult to improve the design later. All other
things being equal, the aim is to finish the entire design. If the
deadline is inflexible, the radical approach should be employed;
otherwise, the conservative approach. If one is required by his
organization to provide accurate, detailed estimates of schedules,
manpower, and other resources, then the conservative approach should be
used.

    Many projects do seem to follow the extreme conservative approach,

    and while it may, in general, lead to better technical designs, it fails

    for two reasons: 1) on a large project, the user is incapable of speci

    fying the details with any accuracy, and 2) on a large project, users

    and top management increasingly are unwilling to accept two or three

    years of effort with no visible, tangible output. Hence, the movement

toward the radical approach [115].

In TDD, data should receive as much attention as functional compo-
nents, because the interfaces between modules must be carefully
specified. Refinement steps should be simple and explicit.

TDD is Module Programming (MP) to which a rational or functional
decomposition is added. The modules tend to be small, and the require-
ment for references from one module to a lower-level module satisfies
the single-entry, single-exit requirement of Structured Programming.
TDD thus has the benefits of Structured Programming which MP does not
[94].

    However, when following a TDD, common functions may not be recog

    nized, or the development process may require too much time, especially


for a large program. TDD is preferred when the developer is more
experienced in constructing a component to match a set of specifica-
tions, or when decisions concerning data representations must be
delayed. A combination of the TDD and bottom-up approaches is often
more practical.

    2.6.5. Michael Jackson Structured Design (MJSD)

    As developed by Michael Jackson, the methodology incorporates the

    technologies of top-down development, structured programming, and struc

    tured walk-throughs. It is a data-driven program design technique. The

    design process consists of four sequential steps: data step, program

    step, operations step, and text step.

    In this methodology a program is viewed as the means by which input

    data are transformed into output data. The system structure must

    parallel the data structures used. Thus a tree chart of the system

    organization reflects the data structure records. If not, then the

    design is incorrect. Paralleling the structure of the input and output

    ensures a quality design. Only serial files are involved. The methodo

    logy requires that the user knows how to structure data.
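
    As a hedged illustration of this parallelism (an invented example in
    Python, not Jackson's diagram or structure-text notation): the input
    below is an iteration of customer groups, each group an iteration of
    transaction records, and the program components mirror that hierarchy
    component for component.

        # Input data structure: file = iteration of groups; group = iteration of
        # records for one customer (the file is serial and sorted by customer).
        records = [("A", 10), ("A", 5), ("B", 7), ("C", 1), ("C", 2)]

        def process_file(recs):                # mirrors "file = iteration of groups"
            i, totals = 0, []
            while i < len(recs):
                i, line = process_group(recs, i)
                totals.append(line)
            return totals

        def process_group(recs, i):            # mirrors "group = iteration of records"
            customer, total = recs[i][0], 0
            while i < len(recs) and recs[i][0] == customer:
                total += recs[i][1]            # mirrors "process one record"
                i += 1
            return i, (customer, total)

        if __name__ == "__main__":
            print(process_file(records))       # [('A', 15), ('B', 7), ('C', 3)]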

    MJSD divides programs into simple and complex programs. A complex

    program must first be divided into a sequence of simple programs, each

    of which can then be designed using the Basic Design Procedure. Most

    programs fail to meet the criteria for a simple program because their

    data structures conflict or because they do not process a complete file

    each time they are executed.


    Breaking a complex problem into a set of simple programs and

    viewing a program as a sequence of simple programs connected by serial

    data streams simplifies the design but can cause serious efficiency

    problems when the design is implemented. Program inversion is the

    technique MJSD used to solve this problem. Program inversion allows a

    program to be designed to process a complete file but to be coded so

    that it can process one record at a time. It involves coding methods

    for eliminating physical files and introducing subroutines to read or

write one record at a time. The structure text and the design are not
affected; program inversion is not used during the design phase.
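
    A loose modern analogue of program inversion, offered only as a hedged
    sketch (Jackson's original technique used saved state variables and
    resumable subroutines, not Python generators): the program below is
    designed as a loop over a complete serial stream of records, but it is
    coded so that the caller hands it one record at a time and takes one
    output line at a time.

        def summarize_stream():
            # Designed as "for each record in the file ...", coded as a generator
            # so that the physical file disappears and records arrive one by one.
            record = yield                                     # wait for first record
            while record is not None:
                customer, amount = record
                record = yield "%s: %.2f" % (customer, amount) # emit one output line

        if __name__ == "__main__":
            program = summarize_stream()
            next(program)                        # prime the inverted program
            print(program.send(("A", 10)))       # A: 10.00
            print(program.send(("B", 5)))        # B: 5.00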

However, how the designer arrives at a program structure that
combines all simple programs is not explained in the methodology. For a
complex problem, the system network diagram, the data structures, and
the program structure do not obviously fit together. Double data
structures, hidden program steps, and combined program structures, a
normal part of the design of a complex program, can confuse and
frustrate even the most experienced of designers.

    Several difficulties are encountered. One is the supporting docu

    mentation which has too few explanatory notes. Also, various file

    accessing and manipulation schemes make the design more difficult.

    Whether the data are tree-structured or not, one may still end up with

    an unimplementable program because there does not appear to be a causal

link between data structure and program quality [167].

    There are two major achievements. The first, a standard solution

    to the backtracking problem, enables the correspondence of data struc

    tures and program structure to be maintained in those cases where the


    serial input stream prohibits making decisions at the points where they

    are required. MJSD enables the essential distinction between a good

    batch and a bad batch to be preserved in the structure of the program.

The second major achievement of MJSD is the recognition of

    structure clashes. Structure clashes occur when the input and output

    data structures conflict. A structure clash is usually recognized during

    the program step of the design process when the designer looks for

    correspondences between data structures. To resolve a structure clash,

    the problem is broken into two or more simple programs by expanding the

    system network diagram. The designer must back up and begin the design

process again with an expanded system network diagram. As with
backtracking, the designer is able to apply standard, simple solutions
to the

    complex problems.

    By defining programs and processes in a way which guarantees for

    each a structural independence of all the others, MJSD makes program

    design more nearly an automatic procedure than is the case with any of

the other methodologies. By so doing it provides one of the
prerequisites of the ideal capability, but it has not yet capitalized on
this in

    its system design techniques [94].

    Both MJSD and SD separate the implementation phase from the design

    phase.  The major difference between the two is that MJSD is based on

    the analysis of data structure, while SD is based on an analysis of

    data flow. One is data-oriented, the other process-oriented. MJSD

    advocates a static view of structures, whereas SD advocates a dynamic

    view of data flow.


    The important difference between MJSD and SD is that in MJSD the

    need to read, update, and produce a new-subscriber master file is

    clearly shown. MJSD is therefore preferred over SD because it provides

    the more complete design.

With respect to data design, MJSD is superior to the other design
methodologies. However, in the areas of control logic design and design
verification, MJSD is weak. There are no guidelines for designing the
control logic that governs the execution of loops and selection
structures during the last part of the last step in the design process,
nor for checking its correctness.

    Verification is an important part of the constructive method. Each

    step should be performed independently and verified independently. MJSD

    includes an informal verification step in each basic design step, which

    is not sufficient. The designer decomposes a complex problem into

    simple programs and verifies only the parts, but verifying the parts

    does not mean the whole is verified.

    The major weakness of MJSD is that it is not directly applicable to

    most real-world problems. First, the design process assumes the

    existence of a complete, correct specification. This is rarely possible

    for most data processing applications. Second, the design process is

    limited to simple programs. Third, the design process is oriented

toward batch-processing systems. It is not an effective design
technique for on-line systems or database systems. In general, MJSD is
more

    difficult to use than other structured design methodologies. The steps

    are tedious to apply.


2.6.6. Problem Statement Language/Analyzer (PSL/PSA)

    PSL (Problem Statement Language) was developed by the ISDOS project

    at the University of Michigan. It is a natural language prescribing no

    particular methodology or procedure, though it contains a set of

    declarations that allow the user to define objects in the proposed

    system, to define properties that each object possesses, and to connect

    the objects via relationships. Each PSL description is a combination of

    formal statements and text annotation.

    PSA (Problem Statement Analyzer) is the implemented processor that

    validates PSL statements. It generates a data base that describes the

system's requirements and performs consistency checks and completeness

    analyses on the data. Many different reports can be generated by the

    PSA processor. These include: data base accesses and changes, errors,

    lists of objects, and relationships among the data in the data base.

    PSL is mostly keyword-oriented, with a relatively simple syntax [83].

    PSL contains a number of types of objects and relationships which

    permit these different aspects to be described: system input/output

    flow, system structure, data structure, data derivation, system size and

volume, system dynamics, system property, and project management.

    PSL/PSA incorporates three important concepts. First, all informa

    tion about the developing system is to be kept in a computerized,

    development-information database. Second, processing of this informa

    tion is to be done with the aid of the computer to the extent possible.

    Third, specifications are to be given in what-terms, not how-terms.
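
    The flavor of these concepts can be suggested with a small, purely
    illustrative Python analogue (the keywords and object names below are
    invented for the example and are not guaranteed to be actual PSL
    syntax): development information is held as a database of
    object-relationship statements, and a processor derives a simple
    completeness report from it, in the spirit of PSA.

        # A toy development-information database:
        # (object-type, object, relation, target).
        statements = [
            ("PROCESS", "compute-pay", "RECEIVES",    "time-card"),
            ("PROCESS", "compute-pay", "GENERATES",   "pay-check"),
            ("INPUT",   "time-card",   "CONSISTS-OF", "hours-worked"),
        ]

        def completeness_report(stmts):
            described = {name for (_, name, _, _) in stmts}
            referenced = {target for (_, _, _, target) in stmts}
            return sorted(referenced - described)   # referenced but never described

        if __name__ == "__main__":
            print("objects not yet described:", completeness_report(statements))
            # -> ['hours-worked', 'pay-check']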


    The approach primarily assists in the communication aspect of

    requirements analysis and specification. It does not incorporate any

    specific method to lead one to a better understanding of the problem

    being worked on. The act of formally specifying all development infor

mation and the analysis reports produced by PSA can aid the analyst's

    understanding of the problem. It was created at least partly in

    response to the tendency of analysts/designers to describe a system in

    programming-level details that are useless for communication with the

    nontechnical user. At the same time, it provides sufficient rigor to

    make it useful for design specification. It has been used in situations

ranging from commercial DP to air-defense systems [81].

PSL/PSA is too general for many specific applications. Naming
conventions and limited attributes are not checked by PSA and must be
used to keep the design manageable. Also, PSA uses large amounts of
computer memory and several megabytes of direct-access secondary storage
on large mainframe computers. However, since 1980 a microcomputer
version of PSL/PSA has been developed with a minimum of 64K bytes of
RAM, and it became a single-user system. The number of reports was also
reduced from 30 to 6 [83, 122].

PSL/PSA needs more precise statements about logical and procedural
information. It is also complicated to use. It should provide more
effective and simpler data entry and modification commands, and provide
more help to the users. The system performance also needs improvement.

    PSL/PSA has a number of benefits. Simple and complex analyses

    become possible and particular qualities, such as completeness, consis-


    tency and coherence can be more easily ensured and checked for. Also,

    the implications of changes can be quickly and completely established.

2.6.7. Structured Analysis and Design Technique (SADT)

Structured Analysis and Design Technique (SADT), a trademark of
SofTech, Inc., is a manual graphical system for system analysis and
design. SADT has three basic principles. The first is the top-down
principle in a very pragmatic form. Any function or data aggregate is
decomposed into no fewer than three, and no more than six, parts at the
next level, because decomposing into fewer generates too many levels and
decomposing into more causes span-of-control problems.

    The second principle is represented by the rectangular transforma

    tion box with its four symmetrically disposed arrows. If the box repre

    sents an activity, the left arrow describes the input data, the right

    arrow the output data, the upper arrow the control data and the lower

    arrow the means used to effect the transformation. This is a formaliza

    tion of the change-of-state emphasis, but is decoupled entirely from any

    computer representation.
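
    The four arrow roles can be captured directly as data; the following is
    a small, hypothetical Python sketch of one activity box (the activity
    and arrow names are invented, and this is not SofTech notation).

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class ActivityBox:
            name: str
            inputs: List[str] = field(default_factory=list)      # left arrows
            outputs: List[str] = field(default_factory=list)     # right arrows
            controls: List[str] = field(default_factory=list)    # upper arrows
            mechanisms: List[str] = field(default_factory=list)  # lower arrows (means)

        machine_part = ActivityBox(
            name="machine part",
            inputs=["raw stock"],
            outputs=["finished part"],
            controls=["process plan", "tolerances"],
            mechanisms=["NC machine", "operator"],
        )

        if __name__ == "__main__":
            print(machine_part)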

    The third principle is that the system may be represented equally

    by a connected set of data boxes. The activity boxes are connected by

    links representing data while the data boxes are connected by links

    representing activities. The methodology acknowledges that the activity

    and data representations are symmetric and equally valid, but since a

    system has to be implemented from its constituent activities the

    activity-based approach is the dominant one. In fact, the drawing of


    the corresponding database diagrams is recommended mainly as a means

    of checking consistency and completeness. There is an interesting

    parallel here between the activity/data ambivalence of SADT and the

transform/transaction ambivalence of SD [94].

One basic assumption made in using SADT is that requirements defi-
nition ultimately requires user input to choose among conflicting goals
and constraints. This is difficult or impossible to automate. While
SADT is a design technique, it assumes the existence of other automated
tools to provide for the on-line data bases that may be needed.

    There are a few problems in using SADT: it is not automated; it is

    sometimes hard to think only in functional terms; the technique lacks a

specific design methodology [83].

The United States Air Force Integrated Computer Aided Manufacturing
(ICAM) Program uses SADT, particularly for applying computer technology
to manufacturing and for demonstrating ICAM technology for transition to
industry. To better facilitate this approach, the ICAM Definition (IDEF)
method was established to better understand, communicate and analyze
manufacturing.

    There are three models in IDEF. Each represents a distinct but

    related view of a system and each provides for a better understanding of

    that system.

1. IDEF0 is used to produce a Function Model (blueprint). These
functions are collectively referred to as decisions, actions and
activities. In order to distinguish between functions, the model is
required to identify what objects are input to the function, what
objects are output from the function, and what objects control the
function. Objects in this model


refer to data, facilities, equipment, resources, material, people,

    organizations, information, etc.

2. IDEF1 is an Information Model. The IDEF1 is a dictionary, a
structured description supported by a glossary which defines, cross-
references, relates and characterizes information at a desired level of
detail necessary to support the manufacturing environment. The IDEF1
identifies entities, relations, attributes, attribute domains, and
attribute assignment constraints. The advantage of the IDEF1 approach

    to describing manufacturing is that it provides an essentially invariant

    structure around which data bases and application subsystems can be

    designed to handle the constantly changing requirements of manufacturing

    information.

3. IDEF2 is a Dynamics Model (scenario). It represents the
time-dependent characteristics of manufacturing to describe and analyze
the behavior of functions and information interacting over time. The
Dynamics Model identifies activation and termination events, describes
sequences of operations, and defines conditional relations between input
and output. The Dynamics Model Entity Flow diagram is supported by
definition forms which quantify the times associated with the diagram.
The Dynamics Model is composed of four submodels: the Resource
Disposition, System Control, and Facility Submodels, which support the
Entity Flow Submodel.

    The ICAM system development methodology is unique because it estab

    lishes a formal definition of the current manufacturing system prior to

    the specification of the future integrated system and it uses a model

    rather than a specification to accomplish this definition. Descriptive


    models for management control and representational models for specifica

    tions and designs are utilized to construct, implement and maintain an

    integrated system  [200].

    2.6.8. Higher Order Software (HOS)

    HOS initially was developed and promoted by Margaret Hamilton and

    Saydean Zeldin while working on NASA projects at MIT. The method was

    invented in response to the need for a formal means of defining

reliable, large scale, multiprogrammed, multiprocessor systems. Its
basic elements include a set of formal laws, a specification language,
an automated analysis of the system interfaces, layers of system archi-
tecture produced from the analyzer output, and transparent hardware.

    This design method requires rigorously defined forms of functional

    decomposition with mathematical rules at each step. The decomposition

    continues until blocks are reached from which executable program code

    can be generated. The technique is implemented with a software graphics

    tool called USE.IT, which automatically generates executable program

code and terminates the decomposition. It can generate code for very

    complex systems with complex logic.

    This design method is based on axioms which explicitly define a

    hierarchy of software control, wherein control is a formally specified

    effect of one software object on another:

    . A given module controls the invocation of the set of valid func

    tions on its immediate, and only its immediate, lower level.

    . A given module is responsible for elements of only its own

    output space.


    . A given module controls the access rights to a set of variables

    whose values define the elements of the output space for each,

    and only each, immediate lower level function.

    . A given module controls the access rights to a set of variables

    whose values define the elements of the input space for each, and

    only each, immediate lower level function.

    . A given module can reject invalid elements of its own, and only

    its own, input set.

    . A given module controls the ordering of each tree for the imme

    diate, and only the immediate, lower levels.

The most primitive form of HOS is a binary tree. Each decomposition
is of a specified type and is called a control structure. The
decomposition has to obey the rules for that control structure, which
enforce mathematically correct decomposition.

Each node of an HOS binary tree represents a function. A function has

    one or more objects as its input and one or more objects as its output.

    An object might be a data item, a list, a table, a report, a file, or a

    data base, or it might be a physical entity. In keeping with mathemati

    cal notation, the input object or objects are written on the right-hand

    side of the function, and the output object or objects are written on

    the left-hand side of the function.
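
    A hedged sketch of this notation in Python (not the HOS or USE.IT
    formalism; the names and the simple check are invented for
    illustration): each node carries its outputs on the left, its function,
    and its inputs on the right, and the decomposition is kept binary, the
    most primitive HOS form.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Node:
            outputs: List[str]                 # written on the left-hand side
            function: str
            inputs: List[str]                  # written on the right-hand side
            children: List["Node"] = field(default_factory=list)

            def check_binary(self):
                # the most primitive HOS form is a binary tree: 0 or 2 children
                assert len(self.children) in (0, 2), self.function + " is not binary"
                for child in self.children:
                    child.check_binary()

        root = Node(["pay_check"], "produce_pay", ["time_card"], children=[
            Node(["pay_check"], "format_check", ["gross_pay"]),
            Node(["gross_pay"], "compute_gross", ["time_card"]),
        ])

        if __name__ == "__main__":
            root.check_binary()
            print("decomposition is a binary tree")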

    Design may proceed in a top-down or bottom-up fashion. In many

    methodologies, the requirements statements, the specifications, the

    high-level design, and the detailed program design are done with

    different languages. With HOS, one language is used for all of these.


    Automatic checks for errors, omissions, and inconsistencies are applied

    at each stage.

    Three primitive control structures are used: JOIN, INCLUDE, and OR.

    Other control structures can be defined as combinations of these three.

    1. JOIN: The parent's function is decomposed into two sequential

    functions. The input of the right-hand child is the same as

    that of the parent. The output of the parent is the same as t