
CS2301 SOFTWARE ENGINEERING

IMPORTANT SIXTEEN MARKS QUESTIONS

UNIT – I – SOFTWARE PRODUCT AND PROCESS

Introduction – S/W Engineering Paradigm – Verification – Validation – Life Cycle Models – System Engineering – Computer Based System – Business Process Engineering Overview – Product Engineering Overview.

1. Explain the Waterfall Software Process Model with a neat diagram.

The waterfall model is the classical model of software engineering. It is one of the oldest models and is widely used in government projects and in many major companies. Because this model emphasizes planning in the early stages, it catches design flaws before they develop. In addition, its intensive documentation and planning make it work well for projects in which quality control is a major concern. It is used when requirements are well understood at the beginning, and it is also called the classic life cycle. A systematic, sequential approach to software development begins with customer specification of requirements and progresses through planning, modeling, construction and deployment. This model suggests a systematic, sequential approach to software development that begins at the system level and progresses through analysis, design, coding and testing. The waterfall method does not prohibit returning to an earlier phase, for example returning from the design phase to the requirements phase; however, this involves costly rework. Each completed phase requires formal review and extensive documentation development. Thus, oversights made in the requirements phase are expensive to correct later.

Advantages:

1. Easy to understand and implement.
2. Widely used and known.
3. Reinforces good habits: define-before-design, design-before-code.
4. Works well on mature products and with weak teams.

Disadvantages:

1. Idealized; doesn't match reality well.
2. Unrealistic to expect accurate requirements so early in the project.
3. Difficult to integrate risk management.

2. Explain the Spiral S/W Process Model with a neat diagram.

The spiral model is similar to the incremental model, with more emphasis placed on risk analysis. The spiral model has four phases: planning, risk analysis, engineering and evaluation. A software project repeatedly passes through these phases in iterations. In the baseline spiral, starting with the planning phase, requirements are gathered and risk is assessed; each subsequent spiral builds on the baseline spiral. Requirements are gathered during the planning phase. In the risk analysis phase, a process is undertaken to identify risks and alternative solutions, and a prototype is produced at the end of this phase. Software is produced in the engineering phase, along with testing at the end of the phase. The evaluation phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral. In the spiral model, the angular component represents progress, and the radius of the spiral represents cost.

The task regions of the spiral model are:

Customer communication: tasks required to establish effective communication between developer and customer.
Planning: estimation, scheduling and risk analysis.
Risk analysis: assessment of technical and management risks.
Engineering: analysis and design.
Construction and release: code and test.
Customer evaluation: delivery and feedback.

Advantages:

1. High amount of risk analysis.
2. Good for large and mission-critical projects.
3. Software is produced early in the software life cycle.

Disadvantages:

1. Can be a costly model to use.
2. Risk analysis requires highly specific expertise.
3. Project's success is highly dependent on the risk analysis phase.
4. Doesn't work well for smaller projects.

3. Explain System Engineering Process in detail.

The system engineering process usually follows a 'waterfall' model because of the need for parallel development of different parts of the system. It covers specifying, designing, implementing, validating, deploying and maintaining socio-technical systems, and is concerned with the services provided by the system, the constraints on its construction and operation, and the ways in which it is used. There is little scope for iteration between phases because hardware changes are very expensive; software may have to compensate for hardware problems. The process inevitably involves engineers from different disciplines who must work together, so there is much scope for misunderstanding: different disciplines use different vocabularies, much negotiation is required, and engineers may have personal agendas to fulfil.


SYSTEM REQUIREMENTS DEFINITION:

Three types of requirement are defined at this stage:

Abstract functional requirements: system functions are defined in an abstract way.
System properties: non-functional requirements for the system in general are defined.
Undesirable characteristics: unacceptable system behaviour is specified.

THE SYSTEM DESIGN PROCESS:

Process steps:

Partition requirements: organise requirements into related groups.
Identify sub-systems: identify a set of sub-systems which collectively can meet the system requirements.
Assign requirements to sub-systems: causes particular problems when COTS systems are integrated.
Specify sub-system functionality.
Define sub-system interfaces: a critical activity for parallel sub-system development.

SUB-SYSTEM DEVELOPMENT PROCESS:

Typically, parallel projects develop the hardware, software and communications; this may involve some COTS (Commercial Off-The-Shelf) systems procurement. Problems include lack of communication across implementation teams, and bureaucratic and slow mechanisms for proposing system changes, which mean that the development schedule may be extended because of the need for rework.

SYSTEM INTEGRATION:

The process of putting hardware, software and people together to make a system is called system integration. This should be tackled incrementally so that sub-systems are integrated one at a time. Interface problems between sub-systems are usually found at this stage, and problems may also arise from uncoordinated deliveries of system components.

SYSTEM INSTALLATION:

System installation issues are:

Environmental assumptions may be incorrect.
There may be human resistance to the introduction of a new system.
The system may have to coexist with alternative systems for some period.
There may be physical installation problems (e.g. cabling problems).
The need for operator training has to be identified.

SYSTEM EVOLUTION:

Large systems have long lifetimes. They must evolve to meet changing requirements, and this evolution may be costly. Existing systems that must be maintained are sometimes called legacy systems.

SYSTEM DECOMMISSIONING:

Taking the system out of service after its useful lifetime is called system decommissioning.

4. Write short notes on Business Process Engineering overview and Product Engineering overview?

BUSINESS PROCESS ENGINEERING:

Business process engineering defines architectures that will enable a business to use information effectively. It involves the specification of the appropriate computing architecture and the development of the software architecture for the organization's computing resources. Three different architectures must be analyzed and designed within the context of business objectives and goals:

The data architecture provides a framework for the information needs of a business (e.g., an ERD).
The application architecture encompasses those elements of a system that transform objects within the data architecture for some business purpose.
The technology infrastructure provides the foundation for the data and application architectures. It includes the hardware and software that are used to support the applications and data.

PRODUCT ENGINEERING:

Product engineering translates the customer's desire for a set of defined capabilities into a working product. It achieves this goal by establishing a product architecture and a support infrastructure. Product architecture components consist of people, hardware, software, and data. The support infrastructure includes the technology required to tie the components together and the information used to support the components.

Requirements engineering elicits the requirements from the customer and allocates function and behavior to each of the four components.
System component engineering happens next as a set of concurrent activities that address each of the components separately. Each component takes a domain-specific view but maintains communication with the other domains. The actual activities of the engineering discipline take on an element view.
Analysis modeling allocates requirements into function, data, and behavior.
Design modeling maps the analysis model into data/class, architectural, interface, and component design.

UNIT – II – SOFTWARE REQUIREMENTS

Functional and Non-Functional – Software Document – Requirement Engineering Process – Feasibility Studies – Software Prototyping – Prototyping in the Software Process – Data – Functional and Behavioral Models – Structured Analysis and Data Dictionary.

1. What is Software Prototyping? Explain the prototyping approaches in the software process?

SOFTWARE PROTOTYPING:

Prototyping is the rapid development of a system. Its principal use is to help customers and developers understand the requirements for the system:

Requirements elicitation – users can experiment with a prototype to see how the system supports their work.
Requirements validation – the prototype can reveal errors and omissions in the requirements.

Prototyping can be considered a risk reduction activity.

PROTOTYPING APPROACHES IN SOFTWARE PROCESS:

(Diagram: outline requirements lead either to evolutionary prototyping, which yields the delivered system, or to throw-away prototyping, which yields an executable prototype plus a system specification.)

There are two approaches:

Evolutionary prototyping: in this approach to system development, an initial prototype is prepared and then refined through a number of stages into the final system.
Throw-away prototyping: in this approach a rough practical implementation of the system is produced so that requirement problems can be identified; it is then discarded, and the system is developed using some different engineering paradigm.

EVOLUTIONARY PROTOTYPING:

Objective:

The principal objective of this model is to deliver a working system to the end-user (for example, AI systems). It is based on techniques which allow rapid system iterations. Verification is impossible, as there is no specification; validation means demonstrating the adequacy of the system.

Specification, design and implementation are intertwined. The system is developed as a series of increments that are delivered to the customer. Techniques for rapid system development are used, such as CASE tools and 4GLs. User interfaces are usually developed using a GUI development toolkit.


Advantages:

Fast delivery of the working system.
The user is involved while the system is being developed.
A more useful system can be delivered.

Problems:

Management problems.
Maintenance problems.
Verification.

THROW-AWAY PROTOTYPING:

Objective:

The principal objective of this model is to validate or derive the system requirements. It is developed to reduce requirements risks.

(Diagram: outline requirements → develop prototype → evaluate prototype → specify system; then develop software → validate system → delivered software system, possibly reusing components from the prototype.)

The prototype is developed from an initial specification, delivered for experimentation, and then discarded.

Advantage:

Requirements risks are greatly reduced.

Problems:

It can be undocumented.
Changes made during software development may degrade the system structure.
Sometimes organizational quality standards may not be strictly applied.

INCREMENTAL DEVELOPMENT:

The system is developed and delivered in increments after establishing an overall architecture. Requirements and specifications for each increment may be developed. Users may experiment with delivered increments while others are being developed. An outline of the process is shown below:


(Diagram: define system deliverables → design system architecture → specify system increment → build system increment → validate increment → integrate increment → validate system; if the system is not complete, the cycle repeats for the next increment, otherwise the final system is delivered.)

Therefore, these increments serve as a form of prototype system. The main intention is to combine some of the advantages of prototyping with a more manageable process and better system structure.

2. With suitable examples and the required diagrammatic representation, explain Functional Modeling in detail.

All a functional model really does is describe the computational structure of the system. Even though the system may have many use cases, it should have only one functional model, which may itself be composed of many functional diagrams. The activity of creating a functional model is commonly known as functional modeling. A functional model describes how the system changes: this is best viewed with an activity diagram, which shows how we change from one state to another depending on what actions are being performed or on the overall state of the system. This leads to the key subjects of a functional model, functions and flows, which can be represented with the help of functional diagrams, for example:

Activity Diagrams
Use Case Diagrams
Context Models

ACTIVITY DIAGRAM:

The purpose of an activity diagram is to represent data and activity flows in an application. Within an activity diagram there are many key modeling concepts; the main ones are:

An activity represents an action or a set of actions to be taken.
A control flow shows the sequence of execution.
The initial node marks the beginning of the set of actions.
The final node stops all flow in an activity diagram.
A decision node represents a test condition, much like an IF statement.

USE CASE DIAGRAM:

Use case diagrams help aid the creation, visualization and documentation of various aspects of the software engineering process. Use cases come in pairs:

Use Case Diagram: an overview of the system.
Use Case Description: details about each function.

An actor is something that performs use cases upon a system. An actor is just an entity, meaning it can be a human or another artifact that directly plays an external role in the system, as long as it either directly uses the system or is used by the system. For each use case we have to know the entry conditions (preconditions) and exit conditions (post-conditions): basically, "what is true before the use case" and "what is true after the use case".

CONTEXT MODEL:

Context models are used to illustrate the boundaries of a system. Social and organisational concerns may affect the decision on where to position system boundaries. Architectural models show the system and its relationship with other systems. An example context model is an ATM system:

(Diagram: the auto-teller system in context, connected to a security system, maintenance system, account database, usage database, branch accounting system and branch counter system.)


3. With suitable examples and the required diagrammatic representation, explain Behavioral Modeling in detail.

Behavioural models are used to describe the overall behaviour of a system. Two types of behavioural model are:

Data processing models, which show how data is processed as it moves through the system.
State machine models, which show the system's response to events.

Both of these models are required for a description of the system's behaviour.

DATA PROCESSING MODELS:

Data flow diagrams are used to model the system's data processing. They show the processing steps as data flows through a system and are an intrinsic part of many analysis methods. The notation is simple and intuitive, so customers can understand it, and it shows the end-to-end processing of data.

DATA FLOW DIAGRAMS:

DFDs model the system from a functional perspective. Tracking and documenting how the data associated with a process moves through the system is helpful for developing an overall understanding of the system. Data flow diagrams may also be used to show the data exchange between a system and other systems in its environment. An example is the order-processing DFD below:

(Example DFD for ordering goods: order details plus a blank order form feed 'Complete order form'; the completed order form goes to 'Validate order'; the signed order form goes to 'Record order', which writes order details to the orders file, and to 'Send to supplier'; the checked and signed order plus order notification, with the order amount and account details, drive 'Adjust available budget' against the budget file.)

STATE MACHINE MODELS:

These model the behaviour of the system in response to external and internal events. They show the system's responses to stimuli, so they are often used for modelling real-time systems. State machine models show system states as nodes and events as arcs between these nodes: when an event occurs, the system moves from one state to another. Statecharts are an integral part of the UML.


An example state machine model is the microwave oven model:

(Diagram: from Waiting (do: display time), the Full power and Half power events select the power level (Full power: do: set power = 600; Half power: do: set power = 300); a Number event leads to Set time (do: get number, exit: set time); with the door closed the oven becomes Enabled (do: display 'Ready'), while Door open makes it Disabled (do: display 'Waiting'); Start begins Operation (do: operate oven), and Timer or Cancel returns the oven to Waiting.)
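To make the state machine idea concrete, here is a minimal sketch in Python of a few of the oven's states and transitions. The state and event names follow the figure, but the dictionary-based encoding and the chosen subset of transitions are our own illustration, not part of the original model.

# A minimal state machine sketch for part of the microwave oven model.
# States and events follow the figure; the encoding itself is illustrative.
TRANSITIONS = {
    ("Waiting", "Full power"): "Full power",
    ("Waiting", "Half power"): "Half power",
    ("Full power", "Timer"): "Set time",
    ("Half power", "Timer"): "Set time",
    ("Set time", "Door open"): "Disabled",
    ("Set time", "Door closed"): "Enabled",
    ("Enabled", "Start"): "Operation",
    ("Operation", "Cancel"): "Waiting",
}

def step(state, event):
    """Return the next state, or stay in the same state if the event is not handled."""
    return TRANSITIONS.get((state, event), state)

state = "Waiting"
for event in ["Full power", "Timer", "Door closed", "Start", "Cancel"]:
    state = step(state, event)
    print(event, "->", state)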

4. Explain the various functionalities accomplished in the Requirement Engineering Process in detail.

OBJECTIVES:

To describe the principal requirements engineering activities and their relationships.
To introduce techniques for requirements elicitation and analysis.
To describe requirements validation and the role of requirements reviews.
To discuss the role of requirements management in support of other requirements engineering processes.

REQUIREMENTS ENGINEERING PROCESS:

The processes used for RE vary widely depending on the application domain, the people involved and the organisation developing the requirements. However, there are a number of generic activities common to all processes:

Requirements elicitation;
Requirements analysis;
Requirements validation;
Requirements management.


FEASIBILITY STUDIES:

A feasibility study decides whether or not the proposed system is worthwhile. It is a short, focused study that checks: if the system contributes to organisational objectives; if the system can be engineered using current technology and within budget; and if the system can be integrated with other systems that are used.

ELICITATION AND ANALYSIS:

This is sometimes called requirements elicitation or requirements discovery. It involves technical staff working with customers to find out about the application domain, the services that the system should provide and the system's operational constraints. It may involve end-users, managers, engineers involved in maintenance, domain experts, trade unions, etc. These are called stakeholders.

REQUIREMENTS VALIDATION:

Validation is concerned with demonstrating that the requirements define the system that the customer really wants. Requirements error costs are high, so validation is very important: fixing a requirements error after delivery may cost up to 100 times the cost of fixing an implementation error.

Requirements validation techniques:

Requirements reviews: systematic manual analysis of the requirements.
Prototyping: using an executable model of the system to check requirements.
Test-case generation: developing tests for requirements to check testability.

REQUIREMENTS MANAGEMENT:

Requirements management is the process of managing changing requirements during the requirements engineering process and system development. Requirements are inevitably incomplete and inconsistent: new requirements emerge during the process as business needs change and a better understanding of the system is developed, and different viewpoints have different requirements which are often contradictory.

During the requirements engineering process the following must be planned:

Requirements identification: how requirements are individually identified.
A change management process: the process followed when analysing a requirements change.
Traceability policies: the amount of information about requirements relationships that is maintained.
CASE tool support: the tool support required to help manage requirements change.

Traceability:

Traceability is concerned with the relationships between requirements, their sources and the system design:

Source traceability: links from requirements to the stakeholders who proposed them.
Requirements traceability: links between dependent requirements.
Design traceability: links from the requirements to the design.

CASE Tool Support:

Requirements storage: requirements should be maintained in a secure, managed data store.
Change management: the process of change management is a workflow process whose stages can be defined, with the information flow between these stages partially automated.

Change Management:

Change management should apply to all proposed changes to the requirements. The principal stages are:

Problem analysis: discuss the requirements problem and propose a change.
Change analysis and costing: assess the effects of the change on other requirements.
Change implementation: modify the requirements document and other documents to reflect the change.


UNIT – III – ANALYSIS, DESIGN CONCEPTS AND PRINCIPLES

Systems Engineering – Analysis Concepts – Design Process And Concepts – Modular Design – Design Heuristic – Architectural Design – Data Design – User Interface Design – Real Time Software Design – System Design – Real Time Executives – Data Acquisition System – Monitoring And Control System.

1. Explain the Modular Design with necessary diagrams.

EFFECTIVE MODULAR DESIGN:

Modularity has become an accepted approach in all engineering disciplines. A modular design reduces complexity, facilitates change, and results in easier implementation by encouraging parallel development of different parts of a system.

FUNCTIONAL INDEPENDENCE:

The concept of functional independence is a direct outgrowth of modularity and the concepts of abstraction and information hiding. Functional independence is achieved by developing modules with "single-minded" function and an "aversion" to excessive interaction with other modules. Stated another way, we want to design software so that each module addresses a specific sub-function of the requirements and has a simple interface when viewed from other parts of the program structure. It is fair to ask why independence is important. Software with effective modularity, that is, independent modules, is easier to develop because function may be compartmentalized and interfaces are simplified. Independent modules are easier to maintain because secondary effects caused by design or code modification are limited, error propagation is reduced, and reusable modules are possible. To summarize, functional independence is a key to good design, and design is the key to software quality. Independence is measured using two qualitative criteria: cohesion and coupling. Cohesion is a measure of the relative functional strength of a module. Coupling is a measure of the relative interdependence among modules.

COHESION:

Cohesion is a natural extension of the information hiding concept. A cohesive module performs a single task within a software procedure, requiring little interaction with procedures being performed in other parts of a program. Stated simply, a cohesive module should do just one thing. Cohesion may be represented as a "spectrum." We always strive for high cohesion, although the mid-range of the spectrum is often acceptable. Low-end cohesiveness is much "worse" than the middle range, which is nearly as "good" as high-end cohesion. In practice, a designer need not be concerned with categorizing cohesion in a specific module; rather, the overall concept should be understood and low levels of cohesion should be avoided when modules are designed.

At the low end of the spectrum, we encounter a module that performs a set of tasks that relate to each other loosely, if at all. Such modules are termed coincidentally cohesive. A module that performs tasks that are related logically is logically cohesive (e.g., one module may read all kinds of input). When a module contains tasks that are related by the fact that all must be executed within the same span of time, the module exhibits temporal cohesion. As an example of low cohesion, consider a module that performs error processing for an engineering analysis package. The module is called when computed data exceed pre-specified bounds. It performs the following tasks: (1) computes supplementary data based on the original computed data, (2) produces an error report on the user's workstation, (3) performs follow-up calculations requested by the user, (4) updates a database, and (5) enables menu selection for subsequent processing. Although the preceding tasks are loosely related, each is an independent functional entity that might best be performed as a separate module. Combining the functions into a single module can serve only to increase the likelihood of error propagation when a modification is made to one of its processing tasks.

Moderate levels of cohesion are relatively close to one another in the degree of module independence. When processing elements of a module are related and must be executed in a specific order, procedural cohesion exists. When all processing elements concentrate on one area of a data structure, communicational cohesion is present. High cohesion is characterized by a module that performs one distinct procedural task.

COUPLING:

Coupling is a measure of interconnection among modules in a software structure. Coupling depends on the interface complexity between modules, the point at which entry or reference is made to a module, and what data pass across the interface. In software design, we strive for the lowest possible coupling. Simple connectivity among modules results in software that is easier to understand and less prone to a "ripple effect," caused when errors occur at one location and propagate through a system.

Consider a structure in which modules a and d are subordinate to different modules. Each is unrelated, and therefore no direct coupling occurs. Module c is subordinate to module a and is accessed via a conventional argument list through which data are passed. As long as a simple argument list is present, low (data) coupling is exhibited in this portion of the structure. A variation of data coupling, called stamp coupling, is found when a portion of a data structure is passed via a module interface; this occurs between modules b and a. At moderate levels, coupling is characterized by the passage of control between modules. Control coupling is very common in most software designs; an example is a "control flag" passed between modules d and e. High coupling occurs when a number of modules reference a global data area; this is called common coupling. Suppose modules c, g, and k each access a data item in a global data area. Module c initializes the item; later, module g recomputes and updates the item. Assume that an error occurs and g updates the item incorrectly. Much later in processing, module k reads the item, attempts to process it, and fails, causing the software to abort. The apparent cause of the abort is module k; the actual cause is module g. Diagnosing problems in structures with considerable common coupling is time consuming and difficult. However, this does not mean that the use of global data is necessarily "bad"; it does mean that a software designer must be aware of the potential consequences of common coupling and take special care to guard against them. The highest degree of coupling, content coupling, occurs when one module makes use of data or control information maintained within the boundary of another module. Secondarily, content coupling occurs when branches are made into the middle of a module. This mode of coupling can and should be avoided.
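The difference between data coupling and common coupling can be sketched in a few lines of code. The function names and the global variable below are hypothetical; the point is only that the data-coupled version communicates through a simple argument list, while the common-coupled version communicates through shared global state, reproducing the c/g/k scenario described above.

# Data coupling: modules communicate through a simple argument list.
def compute_interest(balance, rate):
    return balance * rate

# Common coupling: modules communicate through a shared global data area.
# An incorrect update made anywhere is visible everywhere, which makes
# faults hard to trace back to the module that caused them.
shared_balance = 0.0

def init_balance():          # plays the role of module c
    global shared_balance
    shared_balance = 100.0

def update_balance():        # plays the role of module g
    global shared_balance
    shared_balance = shared_balance - 250.0   # erroneous update

def report_balance():        # plays the role of module k
    # Fails here, but the actual cause is update_balance().
    assert shared_balance >= 0, "negative balance"
    return shared_balance

init_balance()
update_balance()
# report_balance() would now fail, even though the fault lies in update_balance().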

2. List out the Design Heuristics for effective Modular Design.

Evaluate the first iteration of the program structure to reduce coupling and improve cohesion.
Attempt to minimize structures with high fan-out; strive for fan-in as structure depth increases.
Keep the scope of effect of a module within the scope of control of that module.
Evaluate module interfaces to reduce complexity, reduce redundancy, and improve consistency.
Define modules whose function is predictable and not overly restrictive (e.g. a module that implements only a single sub-function).
Strive for controlled-entry modules; avoid pathological connections (e.g. branches into the middle of another module).

Modular Design Evaluation Criteria:

Modular decomposability: provides a systematic means for breaking a problem into sub-problems.
Modular composability: supports reuse of existing modules in new systems.
Modular understandability: a module can be understood as a stand-alone unit.
Modular continuity: side-effects due to module changes are minimized.
Modular protection: side-effects due to processing errors are minimized.

3. Write short notes on User Interface Design.

User interface design means designing effective interfaces for software systems.

OBJECTIVES:

To explain different interaction styles.
To introduce styles of information presentation.
To describe the user support which should be built in to user interfaces.
To introduce usability attributes and approaches to system evaluation.

USER INTERFACE DESIGN PRINCIPLES:

UI design must take account of the needs, experience and capabilities of the system users. Designers should be aware of people's physical and mental limitations (e.g. limited short-term memory) and should recognise that people make mistakes. UI design principles underlie interface designs, although not all principles are applicable to all designs.

User familiarity: the interface should be based on user-oriented terms and concepts rather than computer concepts. For example, an office system should use concepts such as letters, documents and folders rather than directories and file identifiers.
Consistency: the system should display an appropriate level of consistency. Commands and menus should have the same format, command punctuation should be similar, and so on.
Minimal surprise: if a command operates in a known way, the user should be able to predict the operation of comparable commands.
Recoverability: the system should provide some resilience to user errors and allow the user to recover from them. This might include an undo facility, confirmation of destructive actions, 'soft' deletes, etc.
User guidance: some user guidance, such as help systems and on-line manuals, should be supplied.
User diversity: interaction facilities for different types of user should be supported. For example, some users have seeing difficulties, so larger text should be available.

INTERFACE EVALUATION:

Some evaluation of a user interface design should be carried out to assess its suitability. Full-scale evaluation is very expensive and impractical for most systems. Ideally, an interface should be evaluated against a usability specification; however, it is rare for such specifications to be produced.

4. Write short notes on Real Time Software Design.

Real-time software design means designing embedded software systems whose behaviour is subject to timing constraints.


OBJECTIVES:

To describe a design process for real-time systems.
To explain the role of a real-time executive.
To introduce generic architectures for monitoring and control and data acquisition systems.

REAL TIME SYSTEMS:

A real-time system is a system that monitors and controls its environment. It is inevitably associated with two kinds of hardware device:

Sensors: collect data from the system environment.
Actuators: change (in some way) the system's environment.

Time is critical: real-time systems must respond within specified times. A real-time system is a software system where the correct functioning of the system depends both on the results produced by the system and on the time at which these results are produced. A 'soft' real-time system is a system whose operation is degraded if results are not produced according to the specified timing requirements. A 'hard' real-time system is a system whose operation is incorrect if results are not produced according to the timing specification.

(Diagram: a real-time control system connected to multiple sensors and actuators.)

SYSTEM ELEMENTS:

Sensor control processes: collect information from sensors and may buffer information collected in response to a sensor stimulus.
Data processor: carries out processing of the collected information and computes the system response.
Actuator control: generates control signals for the actuators.


(Diagram: stimulus → sensor → sensor control → data processor → actuator control → actuator → response.)
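The stimulus-to-response pipeline above can be expressed as a simple polling control loop. This is a minimal sketch: the sensor and actuator functions below are stand-ins for real device I/O, and only the loop structure reflects the model.

import time

def read_sensor():
    """Stand-in for real device input, e.g. a temperature probe."""
    return 42.0

def actuate(signal):
    """Stand-in for real device output, e.g. a heater switch."""
    print("actuator command:", signal)

def control_loop(setpoint, period_s=0.1, cycles=3):
    # Sensor control -> data processor -> actuator control, once per period.
    for _ in range(cycles):
        value = read_sensor()                          # collect stimulus
        signal = "on" if value < setpoint else "off"   # compute response
        actuate(signal)                                # generate control signal
        time.sleep(period_s)                           # respect the timing period

control_loop(setpoint=50.0)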

SYSTEM DESIGN:

Design both the hardware and the software associated with the system, partitioning functions to either hardware or software. Design decisions should be made on the basis of non-functional system requirements. Hardware delivers better performance but potentially means longer development and less scope for change.

MONITORING AND CONTROL SYSTEMS:

This is an important class of real-time systems: they continuously check sensors and take actions depending on the sensor values. Monitoring systems examine sensors and report their results; control systems take sensor values and control hardware actuators.

DATA ACQUISITION SYSTEMS:

These collect data from sensors for subsequent processing and analysis. The data collection processes and the processing processes may have different periods and deadlines, and data collection may be faster than processing (e.g. when collecting information about an explosion). Circular or ring buffers are a mechanism for smoothing these speed differences, as in the sketch below.
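As a sketch of how a ring buffer smooths the speed difference between collection and processing, here is a minimal fixed-size circular buffer in Python. The class and its methods are illustrative, not a specific real-time executive's API; when the buffer is full, the oldest sample is overwritten, so a fast producer never blocks.

class RingBuffer:
    """Fixed-size circular buffer: fast producers overwrite the oldest data."""
    def __init__(self, size):
        self.data = [None] * size
        self.size = size
        self.head = 0      # next write position
        self.count = 0     # number of stored items

    def put(self, item):
        self.data[self.head] = item
        self.head = (self.head + 1) % self.size
        self.count = min(self.count + 1, self.size)

    def get(self):
        if self.count == 0:
            return None
        tail = (self.head - self.count) % self.size
        item = self.data[tail]
        self.count -= 1
        return item

buf = RingBuffer(4)
for sample in range(6):       # producer is faster than the consumer
    buf.put(sample)
print(buf.get(), buf.get())   # oldest surviving samples: 2 3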

UNIT – IV TESTING

Taxonomy Of Software Testing – Types Of S/W Test – Black Box Testing – Testing Boundary Conditions – Structural Testing – Test Coverage Criteria Based On Data Flow Mechanisms – Regression Testing – Unit Testing – Integration Testing – Validation Testing – System Testing And Debugging – Software Implementation Techniques

1. Explain Cyclomatic Complexity and its calculation with an example.

The number of tests needed to test all control statements equals the cyclomatic complexity. Cyclomatic complexity equals the number of simple conditions in a program plus 1; equivalently, for a program flow graph it is the number of edges minus the number of nodes plus 2. The measure is useful if used with care, but it does not imply adequacy of testing: although all independent paths are executed, all combinations of paths are not executed.

Independent Paths:

1, 2, 3, 8, 9
1, 2, 3, 4, 6, 7, 2
1, 2, 3, 4, 5, 7, 2
1, 2, 3, 4, 6, 7, 2, 8, 9

Test cases should be derived so that all of these paths are executed. A dynamic program analyser may be used to check that the paths have been executed.

(Flow graph of a binary search routine with nodes 1–9: node 2 is the loop test 'while bottom <= top', node 3 tests 'if (elemArray[mid] == key)', node 4 tests 'if (elemArray[mid] < key)', and nodes 8–9 are reached when the element is found or 'bottom > top'.)
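As a worked sketch (not from the original notes), consider the binary search routine written out in Python: it has three simple conditions, so its cyclomatic complexity is 3 + 1 = 4, matching the four independent paths listed above and the flow-graph formula V(G) = E − N + 2.

def binary_search(elem_array, key):
    """Binary search with three simple conditions -> V(G) = 3 + 1 = 4."""
    bottom, top = 0, len(elem_array) - 1
    while bottom <= top:                      # condition 1
        mid = (bottom + top) // 2
        if elem_array[mid] == key:            # condition 2
            return mid
        if elem_array[mid] < key:             # condition 3
            bottom = mid + 1
        else:
            top = mid - 1
    return -1

# Flow-graph check: the 9-node graph above has 11 edges, so
# V(G) = E - N + 2 = 11 - 9 + 2 = 4, i.e. 4 test cases cover the basis set.
assert binary_search([1, 3, 5, 7], 5) == 2    # element found
assert binary_search([1, 3, 5, 7], 2) == -1   # element absent, loop exits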


2. Explain the types of Black Box Testing in detail.

EQUIVALENCE PARTITIONING:

Equivalence partitioning is a black-box testing method that divides the input domain of a program into classes of data from which test cases can be derived. An ideal test case single-handedly uncovers a class of errors (e.g., incorrect processing of all character data) that might otherwise require many cases to be executed before the general error is observed. Equivalence partitioning strives to define a test case that uncovers classes of errors, thereby reducing the total number of test cases that must be developed.

Test case design for equivalence partitioning is based on an evaluation of equivalence classes for an input condition. If a set of objects can be linked by relationships that are symmetric, transitive, and reflexive, an equivalence class is present. An equivalence class represents a set of valid or invalid states for input conditions. Typically, an input condition is a specific numeric value, a range of values, a set of related values, or a Boolean condition.

Equivalence classes may be defined according to the following guidelines:

If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
If an input condition is Boolean, one valid and one invalid class are defined.

As an example, consider data maintained as part of an automated banking application. The user can access the bank using a personal computer, provide a six-digit password, and follow with a series of typed commands that trigger various banking functions. During the log-on sequence, the software supplied for the banking application accepts data in the form:

Area code – blank or three-digit number
Prefix – three-digit number not beginning with 0 or 1
Suffix – four-digit number
Password – six-digit alphanumeric string
Commands – check, deposit, bill pay, and the like

The input conditions associated with each data element for the banking application can be specified as:

Area code: input condition, Boolean – the area code may or may not be present; input condition, range – values defined between 200 and 999, with specific exceptions.
Prefix: input condition, range – specified value > 200.
Suffix: input condition, value – four-digit length.
Password: input condition, Boolean – a password may or may not be present; input condition, value – six-character string.
Command: input condition, set – containing the commands noted previously.

Applying the guidelines for the derivation of equivalence classes, test cases for each input domain data item can be developed and executed. Test cases are selected so that the largest number of attributes of an equivalence class is exercised at once.

BOUNDARY VALUE ANALYSIS:

For reasons that are not completely clear, a greater number of errors tends to occur at the boundaries of the input domain rather than in the "center." It is for this reason that boundary value analysis (BVA) has been developed as a testing technique. Boundary value analysis leads to a selection of test cases that exercise bounding values. It is a test case design technique that complements equivalence partitioning: rather than selecting any element of an equivalence class, BVA leads to the selection of test cases at the "edges" of the class, and rather than focusing solely on input conditions, BVA derives test cases from the output domain as well.

Guidelines for BVA are similar in many respects to those provided for equivalence partitioning:

If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b and just above and just below a and b.
If an input condition specifies a number of values, test cases should be developed that exercise the minimum and maximum numbers; values just above and below the minimum and maximum are also tested.
Apply guidelines 1 and 2 to output conditions. For example, assume that a temperature vs. pressure table is required as output from an engineering analysis program. Test cases should be designed to create an output report that produces the maximum (and minimum) allowable number of table entries.
If internal program data structures have prescribed boundaries (e.g., an array has a defined limit of 100 entries), be certain to design a test case to exercise the data structure at its boundary.

Most software engineers intuitively perform BVA to some degree. By applying these guidelines, boundary testing will be more complete, thereby having a higher likelihood of error detection.
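As a hedged sketch of how these guidelines turn into concrete test inputs, the helper below generates boundary and out-of-range values for a bounded integer range such as the 200–999 area code; the function name and return format are our own illustration.

def range_test_values(a, b):
    """Boundary value analysis for an input range [a, b]:
    values at a and b, just inside, and just outside (the invalid classes)."""
    valid = [a, a + 1, (a + b) // 2, b - 1, b]        # one valid class
    invalid = [a - 1, b + 1]                          # two invalid classes
    return valid, invalid

# Example: the 200-999 area-code range from the banking application.
valid, invalid = range_test_values(200, 999)
print("valid:", valid)      # [200, 201, 599, 998, 999]
print("invalid:", invalid)  # [199, 1000]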

3. Explain Unit Testing and Structural Testing in detail.

UNIT TESTING:

In unit testing the individual components are tested independently to ensure their quality. The focus is on uncovering errors in design and implementation. The various tests conducted during unit testing are described below:

The module interface is tested for proper information flow into and out of the program.
Local data are examined to ensure that integrity is maintained.
Boundary conditions are tested to ensure that the module operates properly at the boundaries established to limit or restrict processing.
All basis (independent) paths are tested to ensure that all statements in the module have been executed at least once.
All error handling paths should be tested.


a. Driver and stub software need to be developed to test incomplete software. A driver is a program that accepts test data and prints the relevant results, and a stub is a subprogram that uses the module's interfaces and performs minimal data manipulation if required.
b. Unit testing is simplified when a component with high cohesion (a single function) is designed. In such a design the number of test cases is smaller, and one can more easily predict and uncover errors.
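A minimal sketch of the driver/stub idea using Python's standard unittest module: the test case acts as the driver, and a stub stands in for a not-yet-written subordinate module. All names here (convert, fetch_rate functions) are hypothetical.

import unittest

def convert(amount, fetch_rate):
    """Module under test: converts an amount using a subordinate module
    that supplies the exchange rate (injected so a stub can replace it)."""
    return amount * fetch_rate()

def stub_fetch_rate():
    """Stub: stands in for the real rate-lookup module, returning fixed data."""
    return 2.0

class ConvertTest(unittest.TestCase):
    """Driver: feeds test data to the module and checks the results."""
    def test_convert_uses_rate(self):
        self.assertEqual(convert(10, stub_fetch_rate), 20.0)

    def test_boundary_zero(self):
        self.assertEqual(convert(0, stub_fetch_rate), 0.0)

if __name__ == "__main__":
    unittest.main()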


STRUCTURAL TESTING:

White-box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using white-box testing methods, the software engineer can derive test cases that (1) guarantee that all independent paths within a module have been exercised at least once, (2) exercise all logical decisions on their true and false sides, (3) execute all loops at their boundaries and within their operational bounds, and (4) exercise internal data structures to ensure their validity.

A reasonable question might be posed at this juncture: "Why spend time and energy worrying about (and testing) logical minutiae when we might better expend effort ensuring that program requirements have been met?" Stated another way, why don't we spend all of our energy on black-box tests? The answer lies in the nature of software defects:

Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed. Errors tend to creep into our work when we design and implement functions, conditions, or controls that are out of the mainstream. Everyday processing tends to be well understood, while "special case" processing tends to fall into the cracks.
We often believe that a logical path is not likely to be executed when, in fact, it may be executed on a regular basis. The logical flow of a program is sometimes counterintuitive, meaning that our unconscious assumptions about flow of control and data may lead us to make design errors that are uncovered only once path testing commences.
Typographical errors are random. When a program is translated into programming language source code, it is likely that some typing errors will occur. Many will be uncovered by syntax and type checking mechanisms, but others may go undetected until testing begins. It is as likely that a typo will exist on an obscure logical path as on a mainstream path.

Basis Path Testing:

Basis path testing is a white-box testing technique first proposed by Tom McCabe. The basis path method enables the test case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing.

Flow Graph Notation:

Before the basis path method can be introduced, a simple notation for the representation of control flow, called a flow graph (or program graph), must be introduced. The flow graph depicts logical control flow; each structured construct (sequence, if, while, until, case) has a corresponding flow graph symbol.

To illustrate the use of a flow graph, consider a procedural design represented as a flowchart depicting the program control structure. The flowchart maps into a corresponding flow graph (assuming that no compound conditions are contained in the decision diamonds of the flowchart). Each circle, called a flow graph node, represents one or more procedural statements: a sequence of process boxes and a decision diamond can map into a single node. The arrows on the flow graph, called edges or links, represent flow of control and are analogous to flowchart arrows. An edge must terminate at a node, even if the node does not represent any procedural statements (e.g., in the symbol for the if-then-else construct). Areas bounded by edges and nodes are called regions; when counting regions, we include the area outside the graph as a region.

When compound conditions are encountered in a procedural design, the generation of a flow graph becomes slightly more complicated. A compound condition occurs when one or more Boolean operators (logical OR, AND, NAND, NOR) is present in a conditional statement. In that case, a PDL segment such as IF a OR b translates into a flow graph in which a separate node is created for each of the conditions a and b. Each node that contains a condition is called a predicate node and is characterized by two or more edges emanating from it.

4. Explain the Regression Testing and Integration Testing in detail.

REGRESSION TESTING:

Regression testing is testing done to check that a system update does not re-introduce errors that have been corrected earlier. All, or almost all, regression tests aim at checking the:

Functionality – black box tests.
Architecture – grey box tests.

Since they are supposed to cover all functionality and all previously made changes, regression test suites are usually large. Thus, regression testing needs automatic execution (no human intervention) and automatic checking: leaving the checking to developers will not work. We face the same challenge when automating regression tests as we face when doing automatic test checking in general: which parts of the output should be checked against the oracle? This question gets more important as we need more versions of the same test due to system variability. Simple but annoying, and sometimes expensive, problems include, for example:

Use of the date in the test output.
Changes in the number of blanks or line shifts.
Other format changes.
Changes in lead texts.

Regression testing is a critical part of testing but is often overlooked. Whenever a defect gets fixed, a new feature gets added, or code gets refactored or changed in any way, there is always a chance that the changes may break something that was previously working. Regression testing is the testing of features, functions, etc. that have been tested before, to make sure they still work after a change has been made to the software.

Within a set of release cycles, the flow is typically as follows: the testers test the software and find several defects; the developers fix the defects, possibly add a few more features, and give it back to be tested; the testers then test not only the new features but all of the old features, to make sure they still work.

Questions often arise as to how much regression testing needs to be done. Ideally, everything would be tested just as thoroughly as it was the first time, but this becomes impractical as time goes on and more and more features, and therefore test cases, get added. When looking at which tests to execute during regression testing, some compromises need to be made. When deciding, focus on what has changed: if a feature has been significantly added to or changed, execute many tests against that feature; if a defect has been fixed in a particular area, check that area to see that the fix didn't cause new defects; if, on the other hand, a feature has been working well for some time and hasn't been modified, only a quick test may need to be executed.
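A hedged sketch of the automatic-checking problem mentioned above: before comparing a test's output with a stored oracle ("golden") output, volatile details such as dates and runs of whitespace are normalized away. The normalization rules and the sample strings are illustrative only.

import re

def normalize(text):
    """Mask the volatile parts of test output before oracle comparison."""
    text = re.sub(r"\d{4}-\d{2}-\d{2}", "<DATE>", text)  # mask dates
    text = re.sub(r"[ \t]+", " ", text)                  # collapse blanks
    return text.strip()

def regression_check(actual, golden):
    """True if the output still matches the stored oracle output."""
    return normalize(actual) == normalize(golden)

golden = "Report generated 2010-01-05\nTotal:   42"
actual = "Report generated 2024-06-17\nTotal: 42"
print(regression_check(actual, golden))   # True: only volatile parts differ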

INTEGRATION TESTING:

Integration testing tests complete systems or sub-systems composed of integrated components. Integration testing should be black-box testing, with tests derived from the specification. The main difficulty is localising errors; incremental integration testing reduces this problem.

Approaches to Integration Testing:

Top-down Testing:

Start with the high-level system and integrate from the top down, replacing individual components by stubs where appropriate.

(Diagram: top-down testing sequence – Level 1 components are tested first using Level 2 stubs; Level 2 components are then integrated using Level 3 stubs, and so on.)

Bottom-up Testing:

Integrate individual components in levels until the complete system is created.

(Diagram: bottom-up testing – Level N modules are tested first using test drivers, then combined into Level N−1 sub-systems, and so on up the hierarchy; successive test sequences T1…T5 exercise components A, B, C and D as they are integrated.)

In practice, most integration involves a combination of these strategies.

UNIT – V SOFTWARE PROJECT MANAGEMENT

Measures And Measurements – ZIPF’s Law – Software Cost Estimation – Function Point Models – COCOMO Model – Delphi Method – Scheduling – Earned Value Analysis – Error Tracking – Software Configuration Management – Program Evolution Dynamics – Software Maintenance – Project Planning – Project Scheduling – Risk Management – CASE Tools

1. Explain Software Cost Estimation in detail?

Topics covered:

Software productivity
Estimation techniques
Algorithmic cost modelling
Project duration and staffing

Fundamental estimation questions are:

How much effort is required to complete an activity?
How much calendar time is needed to complete an activity?
What is the total cost of an activity?

Project estimation and scheduling are interleaved management activities.


SOFTWARE COST COMPONENTS:

Hardware and software costs.
Travel and training costs.
Effort costs (the dominant factor in most projects):
o The salaries of engineers involved in the project;
o Social and insurance costs.
Effort costs must take overheads into account:
o Costs of building, heating and lighting.
o Costs of networking and communications.
o Costs of shared facilities (e.g. library, staff restaurant, etc.).

COSTING AND PRICING:

Estimates are made to discover the cost, to the developer, of producing a software system. There is not a simple relationship between the development cost and the price charged to the customer: broader organisational, economic, political and business considerations influence the price charged.

SOFTWARE PRODUCTIVITY:

Productivity is a measure of the rate at which the individual engineers involved in software development produce software and associated documentation. It is not quality-oriented, although quality assurance is a factor in productivity assessment. Essentially, we want to measure useful functionality produced per unit of time.

PRODUCTIVITY MEASURES:

Size-related measures are based on some output from the software process. This may be lines of delivered source code, object code instructions, etc.
Function-related measures are based on an estimate of the functionality of the delivered software. Function points are the best known measure of this type.

LINES OF CODE:

What is a line of code?
o The measure was first proposed when programs were typed on cards with one line per card.
o How does this correspond to statements in a language such as Java, where a statement can span several lines or several statements can sit on one line?
What programs should be counted as part of the system?

This model assumes that there is a linear relationship between system size and the volume of documentation.

ESTIMATION TECHNIQUES:

There is no simple way to make an accurate estimate of the effort required to develop a software system:
o Initial estimates are based on inadequate information in a user requirements definition;
o The software may run on unfamiliar computers or use new technology;
o The people in the project may be unknown.
Project cost estimates may be self-fulfilling: the estimate defines the budget, and the product is adjusted to meet the budget.

The main techniques are:

Algorithmic cost modelling.
Expert judgement.
Estimation by analogy.
Parkinson's Law.
Pricing to win.

PROJECT DURATION AND STAFFING:

As well as effort estimation, managers must estimate the calendar time required to complete a project and when staff will be required. Calendar time can be estimated using a COCOMO 2 formula:

TDEV = 3 × (PM)^(0.33 + 0.2 × (B − 1.01))

where PM is the effort computation and B is the exponent computed as discussed above (B is 1 for the early prototyping model). This computation predicts the nominal schedule for the project. The time required is independent of the number of people working on the project.
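A quick numeric illustration of the schedule formula (our own worked example, assuming an effort of PM = 146 person-months and an exponent B = 1.12):

# Nominal schedule from the COCOMO 2 formula, for an assumed
# effort of PM = 146 person-months and exponent B = 1.12.
PM, B = 146, 1.12
TDEV = 3 * PM ** (0.33 + 0.2 * (B - 1.01))
print(round(TDEV, 1))   # about 17.3 calendar months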

STAFFING REQUIREMENTS:

The staff required can't be computed by dividing the development effort by the required schedule. The number of people working on a project varies depending on the phase of the project, and the more people who work on the project, the more total effort is usually required. A very rapid build-up of people often correlates with schedule slippage.

2. Explain Software Configuration Management in detail?

a) Configuration Management Planning
b) Change Management
c) Version and Release Management
d) System Building
e) CASE Tools for Configuration Management

CONFIGURATION MANAGEMENT PLANNING:

All products of the software process may have to be managed:

a. Specifications;
b. Designs;
c. Programs;
d. Test data;
e. User manuals.

Thousands of separate documents may be generated for a large, complex software system.

The Configuration Management Plan:

Defines the types of documents to be managed and a document naming scheme.
Defines who takes responsibility for the CM procedures and the creation of baselines.
Defines policies for change control and version management.
Defines the CM records which must be maintained.
Describes the tools which should be used to assist the CM process and any limitations on their use.
Defines the process of tool use.
Defines the CM database used to record configuration information.
May include information such as the CM of external software, process auditing, etc.


Configuration Hierarchy:

(Example hierarchy: PCL-TOOLS is composed of COMPILE, BIND, EDIT, MAKE-GEN and FORM; these in turn contain components such as STRUCTURES, HELP, DISPLAY, QUERY, AST-INTERFACE, FORM-SPECS, FORM-IO, CODE, OBJECTS and TESTS.)

The Configuration Database:

All CM information should be maintained in a configuration database. This should allow queries about configurations to be answered:
o Who has a particular system version?
o What platform is required for a particular version?
o What versions are affected by a change to component X?
o How many reported faults are there in version T?
The CM database should preferably be linked to the software being managed.

New versions of software systems are created as they change:
o For different machines/OS;
o Offering different functionality;
o Tailored for particular user requirements.

Configuration management is concerned with managing evolving software systems:
o System change is a team activity;
o CM aims to control the costs and effort involved in making changes to a system.


It involves the development and application of procedures and standards to manage an evolving software product, and may be seen as part of a more general quality management process. When released to CM, software systems are sometimes called baselines, as they are a starting point for further development.

(Diagram: version derivation – an initial system evolves into PC, Server and Desktop versions; the PC version splits into Linux and Windows XP versions, and the Server version into HP and Sun versions.)

CM Standards:

CM should always be based on a set of standards which are applied within an organisation. Standards should define how items are identified, how changes are controlled and how new versions are managed. Standards may be based on external CM standards (e.g. the IEEE standard for CM). Some existing standards are based on a waterfall process model; new CM standards are needed for evolutionary development.

CHANGE MANAGEMENT:

Change management is a procedural process, so it can be modelled and integrated with a version management system. Change management tools include:

A form editor to support processing of the change request forms;
A workflow system to define who does what and to automate information transfer;
A change database that manages change proposals and is linked to a VM system;
A change reporting system that generates management reports on the status of change requests.

VERSION AND RELEASE MANAGEMENT:

Version and release identification: systems assign identifiers automatically when a new version is submitted.
Storage management: the system stores the differences between versions rather than all of the version code.
Change history recording: the reasons for version creation are recorded.
Independent development: only one version at a time may be checked out for change, allowing parallel working on different versions.
Project support: manages groups of files associated with a project rather than just single files.

SYSTEM BUILDING:

It is easier to find problems that stem from component interactions early in the process. This encourages thorough unit testing: developers are under pressure not to 'break the build'. A stringent change management process is required to keep track of problems that have been discovered and repaired.

CASE TOOLS FOR CONFIGURATION MANAGEMENT:

CM processes are standardised and involve applying pre-defined procedures, and large amounts of data must be managed, so CASE tool support for CM is essential. Mature CASE tools to support configuration management are available, ranging from stand-alone tools to integrated CM workbenches.


3. Explain the COCOMO Model in detail?

COCOMO MODELS:

COCOMO has three different models that reflect increasing complexity:

The Basic Model
The Intermediate Model
The Detailed Model

The Development Modes: Project Characteristics:

Organic mode:
o Relatively small, simple software projects.
o Small teams with good application experience work to a set of less-than-rigid requirements.
o Similar to previously developed projects.
o Relatively small and requires little innovation.

Semidetached mode:
o Intermediate (in size and complexity) software projects, in which teams with mixed experience levels must meet a mix of rigid and less-than-rigid requirements.

Embedded mode:
o Software projects that must be developed within a set of tight hardware, software, and operational constraints.

Some Assumptions:

The primary cost driver is the number of Delivered Source Instructions (DSI) / Delivered Lines Of Code developed by the project.
COCOMO estimates assume that the project will enjoy good management by both the developer and the customer.
It assumes the requirements specification is not substantially changed after the plans and requirements phase.
Basic COCOMO is good for quick, early, rough order-of-magnitude estimates of software costs. It does not account for differences in hardware constraints, personnel quality and experience, use of modern tools and techniques, and other project attributes known to have a significant influence on software costs, which limits its accuracy.

BASIC COCOMO MODEL:

Formulas:

E = a_b * (KLOC)^(b_b)
D = c_b * (E)^(d_b)
P = E / D

where E is the effort applied in person-months, D is the development time in chronological months, KLOC / KDSI is the estimated number of delivered lines of code for the project (expressed in thousands), and P is the number of people required. The coefficients a_b, b_b, c_b and d_b are given below:

Software project   a_b   b_b    c_b   d_b
Organic            2.4   1.05   2.5   0.38
Semi-detached      3.0   1.12   2.5   0.35
Embedded           3.6   1.20   2.5   0.32


Equations by mode:

Mode           Effort                  Schedule
Organic        E = 2.4*(KDSI)^1.05     TDEV = 2.5*(E)^0.38
Semidetached   E = 3.0*(KDSI)^1.12     TDEV = 2.5*(E)^0.35
Embedded       E = 3.6*(KDSI)^1.20     TDEV = 2.5*(E)^0.32

Limitation:

Its accuracy is necessarily limited because it omits factors which have a significant influence on software costs. The Basic COCOMO estimates are within a factor of 1.3 of actuals only 29% of the time, and within a factor of 2 only 60% of the time.

Example:

We have determined that our project fits the characteristics of semi-detached mode, and we estimate it will have 32,000 Delivered Source Instructions. Using the formulas, we can estimate:

Effort = 3.0*(32)^1.12 = 146 man-months
Schedule = 2.5*(146)^0.35 = 14 months
Productivity = 32,000 DSI / 146 MM = 219 DSI/MM
Average staffing = 146 MM / 14 months = 10 FSP
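The worked example above can be reproduced with a short Basic COCOMO calculator; the coefficient table is the one given earlier, and the function and variable names are our own.

# Basic COCOMO calculator reproducing the worked example above.
COEFFS = {  # mode: (a_b, b_b, c_b, d_b)
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kdsi, mode):
    a, b, c, d = COEFFS[mode]
    effort = a * kdsi ** b            # person-months
    schedule = c * effort ** d        # chronological months
    people = effort / schedule        # average full-time staff
    return effort, schedule, people

effort, schedule, people = basic_cocomo(32, "semidetached")
print(f"Effort   = {effort:.0f} MM")        # about 146 MM
print(f"Schedule = {schedule:.0f} months")  # about 14 months
print(f"Staff    = {people:.0f} FSP")       # about 10 FSP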