UNIT - I SOFTWARE PROCESS MODEL
Software is more than just a program code. A program is an executable code, which serves some
computational purpose. Software is considered to be collection of executable programming code,
associated libraries and documentations. Software, when made for a specific requirement is
called software product.
Engineering on the other hand, is all about developing products, using well-defined, scientific
principles and methods.
Definition of software engineering: it is a systematic approach to the development, operation, maintenance and retirement of software. It is the application of computer science along with mathematics and other engineering sciences. In the current scenario, software engineering has a specific importance in building particular software.
A set of instructions -> this is a program.
A set of programs -> software.
DIFFERENCE BETWEEN PROGRAM AND SOFTWARE
Software is a collection of computer programs, procedures, rules and associated documentation and data. A program is what a developer generally writes in order to build a particular piece of software.
Program:
1) Small in size.
2) The author himself is the sole user.
3) Single developer.
4) Ad hoc development.
5) Lacks a proper interface.
6) Lacks proper documentation.
Software:
1) Large in size.
2) Large number of users.
3) Developed by a team.
4) Systematic development.
5) Well-defined interface.
6) Well documented.
Some characteristics of software include:
1) Software is developed or engineered.
2) Most software is custom-built rather than assembled from existing components.
3) It comprises computer programs and associated documentation.
4) It is easy to modify.
5) It is easy to reproduce.
6) Software product may be developed for a particular customer or for the general market.
Why software engineering?
1) In the late 1960s hardware prices were falling but software prices were rising.
2) Many software projects failed.
3) Large software projects required large development teams.
4) Many software projects were late and over budget.
5) The complexity of software projects increased.
6) There was demand for new software in the market.
Why study software engineering?
1) Higher productivity.
2) To acquire skills to develop large programs.
3) Ability to solve complex programming problems.
4) To learn techniques of specification and design.
5) Better quality programs.
Applications of software:-
1) System software
2) Application software.
3) Engineering/scientific software.
4) Embedded software.
5) Product line software.
6) Web application software.
7) Artificial intelligence software (AI).
Software engineering is an engineering branch associated with development of software
product using well-defined scientific principles, methods and procedures. The outcome of
software engineering is an efficient and reliable software product.
Definitions
IEEE defines software engineering as:
(1) The application of a systematic, disciplined, quantifiable approach to the development, operation and maintenance of software; that is, the application of engineering to software.
(2) The study of approaches as in the above statement.
Fritz Bauer, a German computer scientist, defines software engineering as:
Software engineering is the establishment and use of sound engineering principles in order to obtain, economically, software that is reliable and works efficiently on real machines.
Software Evolution
The process of developing a software product using software engineering principles and methods
is referred to as software evolution. This includes the initial development of software and its
maintenance and updates, till desired software product is developed, which satisfies the expected
requirements.
Evolution starts from the requirement gathering process. After which developers create a
prototype of the intended software and show it to the users to get their feedback at the early stage
of software product development. The users suggest changes, and several consecutive updates and maintenance cycles follow. This process keeps changing the original software, till the desired software is accomplished.
Even after the user has desired software in hand, the advancing technology and the changing
requirements force the software product to change accordingly. Re-creating the software from scratch for every change in requirements is not feasible. The only feasible and economical solution is to update the existing software so that it matches the latest requirements.
Software Evolution Laws
Lehman has given laws for software evolution. He divided the software into three different
categories:
S-type (static-type) - This is software which works strictly according to defined specifications and solutions. Both the solution and the method to achieve it are understood before coding begins. S-type software is least subject to change, hence it is the simplest of all. For example, a calculator program for mathematical computation.
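The S-type category can be made concrete with a small sketch (a hypothetical Python illustration, not from the original text): the specification is fully known before coding, so the program seldom changes.

```python
# A minimal S-type program: the specification (four arithmetic
# operations) is completely understood before coding, so the
# program is least subject to change.
def calculate(op, a, b):
    if op == "+":
        return a + b
    if op == "-":
        return a - b
    if op == "*":
        return a * b
    if op == "/":
        return a / b
    raise ValueError("unknown operator: " + op)
```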
P-type (practical-type) - This is software built as a collection of procedures, defined by exactly what those procedures can do. In this software, the specifications can be described, but the solution is not obvious instantly. For example, gaming software.
E-type (embedded-type) - This software works closely with the requirements of the real-world environment. It has a high degree of evolution, as there are various changes in laws, taxes etc. in real-world situations. For example, online trading software.
Software Paradigms
Software paradigms refer to the methods and steps, which are taken while designing the
software. Many methods have been proposed and are in use today, but we need to see where these paradigms stand within software engineering. They can be grouped into categories, each contained within the next:
The programming paradigm is a subset of the software design paradigm, which is in turn a subset of the software development paradigm.
Software Development Paradigm
This paradigm is known as the software engineering paradigm, where all the engineering concepts pertaining to the development of software are applied. It includes the research and requirement gathering that help build the software product. It consists of –
Requirement gathering
Software design
Programming
Software Design Paradigm
This paradigm is a part of Software Development and includes –
Design
Maintenance
Programming
Programming Paradigm
This paradigm is related closely to programming aspect of software development. This includes
–
Coding
Testing
Integration
Need of Software Engineering
The need of software engineering arises because of higher rate of change in user requirements
and environment on which the software is working.
Large software - It is easier to build a wall than a house or a building; likewise, as the size of software becomes large, engineering has to step in to give it a scientific process.
Scalability- If the software process were not based on scientific and engineering concepts, it
would be easier to re-create new software than to scale an existing one.
Cost - The hardware industry has shown its skill, and large-scale manufacturing has lowered the price of computer and electronic hardware. But the cost of software remains high if a proper process is not adopted.
Dynamic Nature - The ever-growing and adapting nature of software hugely depends upon the environment in which the user works. If the nature of the software is always changing, new enhancements need to be made to the existing one. This is where software engineering plays a good role.
Quality Management- Better process of software development provides better and quality
software product.
Characteristics of good software
A software product can be judged by what it offers and how well it can be used. The software must satisfy requirements on the following grounds:
Operational
Transitional
Maintenance
Well-engineered and crafted software is expected to have the following characteristics:
Operational
This tells us how well software works in operations. It can be measured on:
Budget
Usability
Efficiency
Correctness
Functionality
Dependability
Security
Safety
Transitional
This aspect is important when the software is moved from one platform to another:
Portability
Interoperability
Reusability
Adaptability
Maintenance
This aspect describes how well the software can maintain itself in the ever-changing environment:
Modularity
Maintainability
Flexibility
Scalability
In short, Software engineering is a branch of computer science, which uses well-defined
engineering concepts required to produce efficient, durable, scalable, in-budget and on-time
software products.
Software Development Life Cycle, SDLC for short, is a well-defined, structured sequence of
stages in software engineering to develop the intended software product.
SDLC Activities
SDLC provides a series of steps to be followed to design and develop a software product
efficiently. SDLC framework includes the following steps:
Communication
This is the first step where the user initiates the request for a desired software product. He
contacts the service provider and tries to negotiate the terms. He submits his request to the
service providing organization in writing.
Requirement Gathering
This step onwards the software development team works to carry on the project. The team holds
discussions with various stakeholders from problem domain and tries to bring out as much
information as possible on their requirements. The requirements are contemplated and segregated
into user requirements, system requirements and functional requirements. The requirements are
collected using a number of practices as given -
studying the existing or obsolete system and software,
conducting interviews of users and developers,
referring to the database or
collecting answers from the questionnaires.
Feasibility Study
After requirement gathering, the team comes up with a rough plan of software process. At this
step the team analyzes whether software can be made to fulfill all requirements of the user, and whether there is any possibility of the software becoming useless. It is found out whether the project is
financially, practically and technologically feasible for the organization to take up. There are
many algorithms available, which help the developers to conclude the feasibility of a software
project.
System Analysis
At this step the developers decide a roadmap of their plan and try to bring up the best software
model suitable for the project. System analysis includes understanding software product limitations, learning system-related problems or changes to be made in existing systems beforehand, and identifying and addressing the impact of the project on the organization and personnel.
The project team analyzes the scope of the project and plans the schedule and resources
accordingly.
Software Design
The next step is to bring all the knowledge of requirements and analysis to the desk and design
the software product. The inputs from users and information gathered in requirement gathering
phase are the inputs of this step. The output of this step comes in the form of two designs; logical
design and physical design. Engineers produce meta-data and data dictionaries, logical diagrams,
data-flow diagrams and in some cases pseudo codes.
Coding
This step is also known as programming phase. The implementation of software design starts in
terms of writing program code in the suitable programming language and developing error-free
executable programs efficiently.
Testing
An estimate says that 50% of the whole software development effort should be spent on testing. Errors may ruin the software at any level, from minor defects up to forcing its removal. Software testing is done while coding by
the developers and thorough testing is conducted by testing experts at various levels of code such
as module testing, program testing, product testing, in-house testing and testing the product at
user’s end. Early discovery of errors and their remedy is the key to reliable software.
Integration
Software may need to be integrated with the libraries, databases and other program(s). This stage
of SDLC is involved in the integration of software with outer world entities.
Implementation
This means installing the software on user machines. At times, software needs post-installation
configurations at user end. Software is tested for portability and adaptability and integration
related issues are solved during implementation.
Operation and Maintenance
This phase confirms the software operation in terms of more efficiency and fewer errors. If
required, the users are trained on, or aided with the documentation on how to operate the
software and how to keep the software operational. The software is maintained timely by
updating the code according to the changes taking place in user end environment or technology.
This phase may face challenges from hidden bugs and real-world unidentified problems.
Disposition
As time elapses, the software may decline on the performance front. It may go completely
obsolete or may need intense upgradation. Hence a pressing need to eliminate a major portion of
the system arises. This phase includes archiving data and required software components, closing
down the system, planning disposition activity and terminating system at appropriate end-of-
system time.
Software Development Lifecycle (SDLC)
Software development lifecycle or SDLC is a structured, tried and tested methodology for
development of software products across the world. SDLC consists of a series of steps that any
software developer should follow in order to build a software product effectively and efficiently.
All the software development lifecycle approaches that we are going to discuss below can
accommodate the generic paradigm discussed in the previous section. We have five major
approaches or models proposed by different experts. Before we get into the models, let us
discuss briefly about the different steps involved in software development, which is common to
all models. The different steps involved are:
Planning –
Here the detailed requirement analysis is done, followed by a detailed roadmap for software
development. List of deliverables, various stakeholders involved and clear roles and
responsibilities are laid out along with appropriate documentation and reporting metrics. This
gives the client an overview of the whole plan.
Defining –
All the client requirements are broken down in detail. Top level requirements are broken down
into specific set of modules and sub-components. Every module and sub-component is defined
properly along with its functionalities and how it would be integrated.
Designing –
In this step, the detailed requirements are analysed and a software design is made. Software
design consists of two different types of designs – logical design and physical design. In logical
design, the overall architecture along with all modules and sub-modules, the data transfer between them, and how it integrates with the existing system is clearly drafted through various diagrams.
Physical design is all about the hardware requirements for software deployment.
Development –
This is the actual step where the software development or coding happens. The modules as
drafted in the design phase are given to multiple teams for coding.
Testing –
All the modules, sub-components that are developed are tested for their respective functionalities
here. This step utilises multiple tools and techniques to identify and fix defects.
Deployment –
The software developed is implemented in client premises. The development team works along
with client teams to ensure that the deployment is smooth and seamless.
INTRODUCTION - SOFTWARE PROCESS MODEL
DEFINITION:
A software process model is an abstract representation of a process. It presents a description of
a process. When we describe and discuss processes, we usually talk about the activities in
these processes such as specifying a data model, designing a user interface, etc. and the ordering
of these activities.
In software development life cycle, various models are designed and defined. These models are
called as Software Development Process Models.
On the basis of project motive, the software development process model is selected for
development.
Following are the different software development process models:
1) Big-Bang model
2) Code-and-fix model
3) Waterfall model
4) V model
5) Incremental model
6) RAD model
7) Agile model
8) Iterative model
9) Spiral model
10) Prototype model
The following framework activities are carried out irrespective of the process model chosen by
the organization.
1. Communication
2. Planning
3. Modeling
4. Construction
5. Deployment
PRESCRIPTIVE PROCESS MODELS
The name 'prescriptive' is given because the model prescribes a set of activities, actions, tasks, quality assurance and change-control mechanisms for every project.
There are three types of prescriptive process models. They are:
1. The Waterfall Model
2. Incremental Process model
3. RAD model
1. The Waterfall Model
The waterfall model is also called as 'Linear sequential model' or 'Classic life cycle
model'.
In this model, each phase is fully completed before the beginning of the next phase.
This model is used for small projects.
In this model, feedback is taken after each phase to ensure that the project is on the right
path.
Testing part starts only after the development is complete.
NOTE: The description of the phases of the waterfall model is the same as that of the process model.
An alternative design for 'linear sequential model' is as follows:
The Waterfall Model also referred to as the sequential lifecycle model was the first model
to be accepted widely in the software industry. This model applies best to projects where there is
minimum uncertainty.
In the waterfall model, every phase of development is completed fully before the next
phase begins. For this reason, each phase is well-defined with a definitive set of objectives.
THE VARIOUS PHASES ARE
Requirements – In this phase, all the requirements are clearly identified and necessary
clarifications are done with the client. The detailed requirements thus obtained are
documented clearly in the specification document.
Design – Based on the requirements gathered, a system design is developed in the design
phase. Specific hardware and system requirements are clearly identified and the overall
system architecture is designed in this phase.
Implementation – Once the system design and architecture are developed, the software
program is developed. If it is complex software, it is broken down into different modules, which are then developed.
Verification – The software program developed is verified and tested thoroughly to see that
all the required functionalities are included. The verification is first for individual modules
and then once the modules are integrated, the overall functionality is verified and tested.
Maintenance – Even after all the rigorous development and testing is done, there could be
some issues that may not be captured during the initial phases. As and when such issues are
captured, they are taken up for development and deployment during the maintenance phase.
In this phase, such issues are resolved through patches of code. Sometimes, the software by
itself is upgraded with better features during maintenance phase. Mostly maintenance is done
after the system gets integrated in the customer environment.
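The strictly sequential nature of the phases above can be sketched as follows (a hypothetical Python illustration; the phase names follow the text, everything else is assumed):

```python
# Waterfall as a strict pipeline: each phase must be fully
# completed before the next one begins.
PHASES = ["Requirements", "Design", "Implementation",
          "Verification", "Maintenance"]

def run_waterfall(do_phase):
    completed = []
    for phase in PHASES:
        do_phase(phase)          # work on exactly one phase at a time
        completed.append(phase)  # only then move to the next phase
    return completed
```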
Advantages of waterfall model
The waterfall model is simple and easy to understand, implement, and use.
All the requirements are known at the beginning of the project, hence it is easy to
manage.
It avoids overlapping of phases, because each phase is completed before the next begins.
This model works for small projects because the requirements are understood very well.
This model is preferred for those projects where the quality is more important as
compared to the cost of the project.
Disadvantages of the waterfall model
This model is not good for complex and object oriented projects.
It is a poor model for long projects.
The problems with this model are uncovered only during software testing.
The amount of risk is high.
V – model
The major drawback of the waterfall model is that we move to the next stage only when the previous one is finished, and there is no chance to go back if something is found wrong in later stages. The V-Model provides a means of testing the software at each stage, in a reverse manner.
At every stage, test plans and test cases are created to verify and validate the product according
to the requirement of that stage. For example, in requirement gathering stage the test team
prepares all the test cases in correspondence with the requirements. Later, when the product is developed and is ready for testing, the test cases of this stage verify the software's validity against the requirements of that stage.
This makes both verification and validation go in parallel. This model is also known as
verification and validation model.
2. Incremental Process model
The incremental model combines the elements of waterfall model and they are applied in an
iterative fashion.
The first increment in this model is generally a core product.
Each increment builds the product and submits it to the customer for any suggested
modifications.
The next increment implements the customer's suggestions and adds additional requirements to the previous increment.
This process is repeated until the product is finished.
For example, word-processing software can be developed using the incremental model.
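The increment-by-increment growth of the product can be sketched as follows (a hypothetical Python illustration; the word-processor features are invented for the example):

```python
# Incremental model: start from a core product and deliver a new,
# larger version to the customer after every increment.
def incremental_delivery(core, increments):
    product = list(core)
    deliveries = []
    for features in increments:
        product.extend(features)          # build on the previous increment
        deliveries.append(list(product))  # hand this version to the customer
    return deliveries
```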
Advantages of incremental model
This model is flexible because the cost of development is low and initial product delivery is
faster.
It is easier to test and debug during the smaller iteration.
Working software is generated quickly and early in the software life cycle.
The customers can respond to its functionalities after every increment.
Disadvantages of the incremental model
The cost of the final product may cross the cost estimated initially.
This model requires very clear and complete planning.
The planning of design is required before the whole system is broken into small increments.
The customer's demands for additional functionality after every increment can cause problems for the system architecture.
RAD (Rapid Application Development) model
RAD is a Rapid Application Development model.
Using the RAD model, software product is developed in a short period of time.
The initial activity starts with the communication between customer and developer.
Planning depends upon the initial requirements and then the requirements are divided into
groups.
Planning is important so that the teams can work together on different modules.
The RAD model consists of the following phases:
1) Business Modeling
2) Data modeling
3) Process modeling
4) Application generation
5) Testing and turnover
1) Business Modeling
Business modeling consists of the flow of information between various functions in the
project.
For example, what type of information is produced by every function and which are the
functions to handle that information.
It is necessary to perform complete business analysis to get the essential business
information.
2) Data modeling
The information gathered in the business modeling phase is refined into the set of objects that are essential for the business.
The attributes of each object are identified and the relationships between the objects are defined.
3) Process modeling
The data objects defined in the data modeling phase are transformed to achieve the information flow needed to implement the business model.
The process description is created for adding, modifying, deleting or retrieving a data object.
4) Application generation
In the application generation phase, the actual system is built.
Automated tools are used to construct the software.
5) Testing and turnover
The prototypes are independently tested after each iteration so that the overall testing time is
reduced.
The data flow and the interfaces between all the components are fully tested. Hence, most of
the programming components are already tested.
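The five RAD phases above can be sketched as successive transformations of business information into a tested application (a hypothetical Python illustration; the function and field names are invented):

```python
# RAD phases as a pipeline: business modeling -> data modeling ->
# process modeling -> application generation -> testing.
def rad(business_functions):
    # 1) Business modeling: information flow between functions
    info_flow = {f: f + " info" for f in business_functions}
    # 2) Data modeling: refine the information into business objects
    objects = [{"name": f, "attrs": [info]} for f, info in info_flow.items()]
    # 3) Process modeling: add/modify/delete/retrieve for each data object
    processes = [(o["name"], op) for o in objects
                 for op in ("add", "modify", "delete", "retrieve")]
    # 4) Application generation and 5) testing of each component
    return {"components": processes, "tested": True}
```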
Advantages of RAD Model
The process of application development and delivery is fast.
This model is flexible if any changes are required.
Reviews are taken from the clients at the start of development, hence there are fewer
chances of missing requirements.
Disadvantages of RAD Model
The feedback from the user is required at every development phase.
This model is not a good choice for long term and large projects.
Evolutionary Models
• A software system evolves over time, as requirements often change while development proceeds.
Thus, a straight line to a complete end product is not possible. However, a limited version must
be delivered to meet competitive pressure.
• Usually a set of core product or system requirements is well understood, but the details and
extension have yet to be defined.
• You need a process model that has been explicitly designed to accommodate a product that evolves over time.
• It is an iterative model that enables you to develop increasingly more complete versions of the software.
• Two types are introduced, namely Prototyping and Spiral models.
Evolutionary Models: PROTOTYPING
• When to use: Customer defines a set of general objectives but does not identify detailed
requirements for functions and features, or the developer may be unsure of the efficiency of an algorithm or of the form that human-computer interaction should take.
• What steps: The process begins with communication, meeting with stakeholders to define the objectives, identify whatever requirements are known, and outline areas where further definition is mandatory. A
quick plan for prototyping and modeling (quick design) occurs. The quick design focuses on a representation of those aspects of the software that will be visible to end users (interface and output). The design leads to the construction of a prototype, which will be deployed and evaluated.
Stakeholder’s comments will be used to refine requirements.
• Both stakeholders and software engineers like the prototyping paradigm. Users get a feel for
the actual system, and developers get to build something immediately. However, engineers may
make compromises in order to get a prototype working quickly. The less-than-ideal choice may then be adopted forever, once you get used to it.
The Prototyping model
A prototype is defined as a first or preliminary form from which other forms are copied or derived.
The prototype model starts with a set of general objectives for the software.
It does not identify detailed requirements such as input and output.
It is a working model of the software with limited functionality.
In this model, working programs are quickly produced.
Evolutionary Models: Prototyping
The different phases of Prototyping model are:
1. Communication
In this phase, developer and customer meet and discuss the overall objectives of the software.
2. Quick design
Quick design is implemented when requirements are known.
It includes only the important aspects like input and output format of the software.
It focuses on those aspects which are visible to the user rather than the detailed plan.
It helps to construct a prototype.
3. Modeling quick design
This phase gives a clearer idea about the development of the software, because a model of the software is now built.
It allows the developer to better understand the exact requirements.
4. Construction of prototype
The prototype is constructed and then evaluated by the customer.
5. Deployment, delivery, feedback
If the user is not satisfied with the current prototype, it is refined according to the requirements of the user.
The process of refining the prototype is repeated until all the requirements of users are
met.
When the users are satisfied with the developed prototype then the system is developed
on the basis of final prototype.
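The refine-until-satisfied cycle of the phases above can be sketched as follows (a hypothetical Python illustration; build, evaluate and refine stand in for the real communication, quick-design and feedback activities):

```python
# Prototyping model: keep refining the prototype from user feedback
# until the user is satisfied; the final system is then built from
# the accepted prototype.
def prototyping(build, evaluate, refine, max_rounds=10):
    prototype = build()
    for _ in range(max_rounds):
        feedback = evaluate(prototype)
        if feedback == "satisfied":
            return prototype            # basis for the final system
        prototype = refine(prototype, feedback)
    return prototype
```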
Advantages of Prototyping Model
The prototype model does not require detailed knowledge of input, output, processes, operating-system adaptability and full machine interaction up front.
In the development process of this model users are actively involved.
The development process is the best platform for the user to understand the system.
Errors are detected much earlier.
Gives quick user feedback for better solutions.
It identifies the missing functionality easily. It also identifies the confusing or difficult
functions.
Disadvantages of Prototyping Model:
There is more client involvement, and it is not always appreciated by the developer.
It is a slow process because it takes more time for development.
Many changes can disturb the rhythm of the development team.
The prototype is thrown away if the users are confused by it.
Spiral Model
The spiral model is a combination of the iterative model and one of the other SDLC models: it can be seen as choosing one SDLC model and combining it with a cyclic process (the iterative model). This model considers risk, which often goes unnoticed in most other models. The model starts with determining the objectives and constraints of the software at the start of one iteration. The next phase is prototyping the software, which includes risk analysis. Then one standard SDLC model is used to build the software. In the fourth phase, the plan for the next iteration is prepared.
The spiral model is a risk-driven process model.
It is used for guiding software projects.
In the spiral model, if a risk is found during risk analysis, alternate solutions are suggested and implemented.
It is a combination of the prototype model and the sequential (waterfall) model.
In one iteration all activities are done; for large projects, the output of each iteration is small.
Evolutionary Models: The Spiral
STEPS OF SPIRAL MODEL
• It couples the iterative nature of prototyping with the controlled and systematic aspects of the
waterfall model, and is a risk-driven process model generator that is used to guide multi-stakeholder concurrent engineering of software-intensive systems.
• Two main distinguishing features: one is cyclic approach for incrementally growing a system’s
degree of definition and implementation while decreasing its degree of risk. The other is a set of
anchor point milestones for ensuring stakeholder commitment to feasible and mutually
satisfactory system solutions.
• A series of evolutionary releases are delivered. During the early iterations, the release might be
a model or prototype. During later iterations, increasingly more complete versions of the engineered system are produced.
• The first circuit in the clockwise direction might result in the product specification; subsequent
passes around the spiral might be used to develop a prototype and then progressively more
sophisticated versions of the software. Each pass results in adjustments to the project plan. Cost
and schedule are adjusted based on feedback. Also, the number of iterations will be adjusted by
project manager.
• Good for developing large-scale systems, as the software evolves as the process progresses and risk can be understood and properly reacted to. Prototyping is used to reduce risk.
• However, it may be difficult to convince customers that it is controllable as it demands
considerable risk assessment expertise.
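One way to picture the circuits of the spiral described above (a hypothetical Python illustration; the risk threshold and the objective strings are invented for the example):

```python
# Spiral model: every circuit sets objectives, performs risk
# analysis, and builds a progressively more complete release;
# an unacceptable risk stops the project.
def spiral(iterations, risk_of, build):
    releases = []
    for objectives in iterations:
        risk = risk_of(objectives)   # risk analysis drives the model
        if risk > 0.8:
            break                    # risk too high: do not continue
        releases.append(build(objectives))
    return releases
```

The first circuit might produce the product specification, later circuits a prototype and then progressively more complete versions.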
Advantages of Spiral Model
It reduces high amount of risk.
It is good for large and critical projects.
It gives strong approval and documentation control.
In the spiral model, the software is produced early in the life cycle.
Disadvantages of Spiral Model
It can be costly to develop a software model.
It is not used for small projects.
Three Concerns on Evolutionary Processes
• The first concern is that prototyping poses a problem to project planning because of the uncertain number of cycles required to construct the product.
• Second, it does not establish the maximum speed of the evolution. If the evolution occurs too fast, without a period of relaxation, it is certain that the process will fall into chaos. On the other hand, if the speed is too slow, then productivity could be affected.
• Third, software processes should be focused on flexibility and extensibility rather than on high
quality. We should prioritize the speed of the development over zero defects. Extending the
development in order to reach high quality could result in a late delivery of the product when the
opportunity niche has disappeared.
3. The concurrent development model
The concurrent development model is also called the concurrent model.
The communication activity completes its first iteration and exits into the awaiting
changes state.
The modeling activity, which waited while initial communication was completed, then moves into
the under development state.
If the customer specifies a change in the requirements, the modeling activity moves
from the under development state into the awaiting changes state.
In the concurrent process model, activities move from one state to another in this way.
STEPS OF CONCURRENT MODEL
• Allow a software team to represent iterative and concurrent elements of any of the
process models. For example, the modeling activity defined for the spiral model is
accomplished by invoking one or more of the following actions: prototyping, analysis
and design.
• The figure shows that the modeling activity may be in any one of the states at any given time.
For example, the communication activity has completed its first iteration and is in the
awaiting changes state. The modeling activity, which was in the inactive state, now makes a
transition into the under development state. If the customer indicates changes in requirements,
the modeling activity moves from the under development state into the awaiting changes
state.
• Concurrent modeling is applicable to all types of software development and provides an
accurate picture of the current state of a project. Rather than confining software
engineering activities, actions and tasks to a sequence of events, it defines a process
network. Each activity, action or task on the network exists simultaneously with other
activities, actions or tasks. Events generated at one point trigger transitions among the
states.
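The state transitions described above can be sketched as a small state machine. The state names follow the description in the text; the event names and the transition table below are assumptions made purely for illustration.

```python
# A minimal sketch of one concurrent-model activity as a state machine.
# The transition table and event names are illustrative assumptions.
TRANSITIONS = {
    ("inactive", "start"): "under development",
    ("under development", "customer_change"): "awaiting changes",
    ("awaiting changes", "resume"): "under development",
    ("under development", "complete"): "done",
}

class Activity:
    """One software engineering activity (e.g., modeling) and its state."""

    def __init__(self, name):
        self.name = name
        self.state = "inactive"

    def on_event(self, event):
        # Events generated at one point in the process network trigger
        # transitions among the states; unknown events leave the state as-is.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

modeling = Activity("modeling")
modeling.on_event("start")            # inactive -> under development
modeling.on_event("customer_change")  # under development -> awaiting changes
```

Because every activity carries its own state, many activities can exist simultaneously in the process network, which is the point of the concurrent model.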
Advantages of the concurrent development model
This model is applicable to all types of software development processes.
It is easy for understanding and use.
It gives immediate feedback from testing.
It provides an accurate picture of the current state of a project.
Disadvantages of the concurrent development model
It requires good communication between team members, which may not be achieved all
the time.
It requires the status of the different activities to be tracked.
SPECIALISED PROCESS MODELS
Specialised process models take many features from one or more conventional models. However,
these specialised models tend to be applied when a narrowly defined software engineering approach
is chosen.
Types in Specialized process models:
1. Component based development (Promotes reusable components)
2. The formal methods model (Mathematical formal methods are backbone here)
3. Aspect oriented software development (Uses crosscutting technology)
Component based development
The component based development model incorporates many of the characteristics of the
spiral model. It is evolutionary in nature, demanding an iterative approach to the creation of
software. However, the model focuses on prepackaged software components. It promotes
software reusability.
In component based development, modeling and construction activities begin with the
identification of candidate components. Candidate components can be designed as either
conventional software modules or object oriented packages.
Component based development has the following steps:
1. Available component based products are researched and evaluated for the application domain.
2. Component integration issues are considered.
3. A software architecture is designed to accommodate the components.
4. Components are integrated into the architecture.
5. Comprehensive testing is conducted to ensure proper functionality.
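Steps 1 through 4 above can be sketched in a few lines of Python. All class and field names below (Component, provides, requires, and the catalog entries) are invented for illustration; they are not from any real component framework.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    provides: set                                # services this prepackaged component offers
    requires: set = field(default_factory=set)   # services it depends on

def select_components(candidates, needed_services):
    # Steps 1-2: research and evaluate candidates; keep those that offer
    # a service the application domain actually needs.
    return [c for c in candidates if c.provides & needed_services]

def integrate(components):
    # Steps 3-4: assemble the architecture and record any dependency
    # that no selected component satisfies (an integration issue).
    provided = set()
    for c in components:
        provided |= c.provides
    issues = sorted(r for c in components for r in c.requires if r not in provided)
    return {"components": [c.name for c in components], "issues": issues}

catalog = [
    Component("AuthLib", {"login"}, {"database"}),
    Component("ReportGen", {"reports"}),
    Component("VideoCodec", {"video"}),
]
chosen = select_components(catalog, {"login", "reports"})
architecture = integrate(chosen)   # flags "database" as an integration issue
```

Comprehensive testing (step 5) would then exercise the assembled architecture, including the flagged issues.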
Problems in Component based development
Component trustworthiness - how can a component with no available source code be trusted?
Component certification - who will certify the quality of components?
Requirements trade-offs - how do we do trade-off analysis between the features of one
component and another?
The formal methods model
The formal methods model encompasses a set of activities that leads to formal mathematical
specification of software. Formal methods enable a software engineer to specify, develop and
verify a computer system by applying rigorous mathematical notation.
When mathematical methods are used during software development, they provide a mechanism
for eliminating many of the problems that are difficult to overcome using other software
engineering paradigms.
Formal methods provide a basis for software verification and therefore enable software engineer
to discover and correct errors that might otherwise go undetected.
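As a small taste of the idea (not a real formal method such as Z or VDM), the pre- and postconditions of an operation can be written down mathematically and checked; the `withdraw` operation below is an invented example.

```python
def withdraw(balance: int, amount: int) -> int:
    """A rigorously specified operation: the assertions state its contract."""
    # Precondition: 0 < amount <= balance
    assert 0 < amount <= balance, "precondition violated"
    new_balance = balance - amount
    # Postcondition: new_balance == balance - amount and new_balance >= 0
    assert new_balance == balance - amount and new_balance >= 0
    return new_balance
```

A verifier can then prove (or a test can demonstrate) that no call satisfying the precondition ever violates the postcondition, which is how formal methods catch errors that might otherwise go undetected.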
Problems in formal methods model
The development of formal models is currently time-consuming and expensive.
Because few developers have the necessary background knowledge to apply formal methods,
extensive training is required.
It is difficult to use this model as a communication mechanism for technically unsophisticated
people.
Aspect oriented Software Development
Aspect Oriented Software Development (AOSD), often referred to as aspect oriented
programming (AOP), is a relatively new paradigm that provides a process and methodology for
defining, specifying, designing and constructing aspects.
It addresses limitations inherent in other approaches, including object-oriented programming.
AOSD aims to address crosscutting concerns by providing means for systematic identification,
separation, representation and composition.
This results in better support for modularization hence reducing development, maintenance and
evolution costs.
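Logging is a classic crosscutting concern: without separation it would be duplicated in every function. A Python decorator gives an aspect-like sketch of the idea, defining the concern once and weaving it onto any function. (Real AOP languages such as AspectJ are far more powerful; this is only an analogy, and all names below are illustrative.)

```python
import functools

log = []  # records collected by the crosscutting logging concern

def logged(func):
    """The 'aspect': record entry and exit around the wrapped function."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.append(f"enter {func.__name__}")
        result = func(*args, **kwargs)
        log.append(f"exit {func.__name__}")
        return result
    return wrapper

@logged
def place_order(item):
    # The core business logic stays free of the logging concern.
    return f"ordered {item}"
```

Separating the concern this way is what yields the modularization benefit, and hence the reduced maintenance cost, that AOSD promises.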
Rational Unified Process (RUP)
Rational Unified Process (RUP) is an object-oriented and Web-enabled program
development methodology. According to Rational (developers of Rational Rose and the Unified
Modeling Language), RUP is like an online mentor that provides guidelines, templates, and
examples for all aspects and stages of program development. RUP and similar products -- such
as Object-Oriented Software Process (OOSP), and the OPEN Process -- are comprehensive
software engineering tools that combine the procedural aspects of development (such as defined
stages, techniques, and practices) with other components of development (such as documents,
models, manuals, code, and so on) within a unifying framework.
RUP stands for "Rational Unified Process." RUP is a software development process from
Rational, a division of IBM. It divides the development process into four distinct phases that
each involve business modeling, analysis and design, implementation, testing, and deployment.
The four phases are:
1. Inception - The idea for the project is stated. The development team determines if the
project is worth pursuing and what resources will be needed.
2. Elaboration - The project's architecture and required resources are further evaluated.
Developers consider possible applications of the software and costs associated with the
development.
3. Construction - The project is developed and completed. The software is designed,
written, and tested.
4. Transition - The software is released to the public. Final adjustments or updates are
made based on feedback from end users.
The RUP development methodology provides a structured way for companies to envision and create
software programs. Since it provides a specific plan for each step of the development process, it
helps prevent resources from being wasted and reduces unexpected development costs.
The Unified Process (UP) is an attempt to draw on the best features and characteristics of
conventional software process models, but characterize them in a way that implements
many of the best principles of agile software development.
The Unified Process recognizes the importance of customer communication and
streamlined methods for describing the customer’s view of a system.
It helps the architect to focus on the right goals, such as understandability, resilience to
future changes, and reuse.
RUP establishes four phases of development, each of which is organized into a number of
separate iterations that must satisfy defined criteria before the next phase is undertaken:
in the inception phase, developers define the scope of the project and its business case; in
the elaboration phase, developers analyze the project's needs in greater detail and define
its architectural foundation; in the construction phase, developers create the application
design and source code; and in the transition phase, developers deliver the system to
users. RUP provides a prototype at the completion of each iteration. The product also
includes process support for Java 2 Enterprise Edition (J2EE) and BEA (WebLogic)
development, and supplies an HTML-based description of the unified process that an
organization can customize for its own use.
UNIT 2 REQUIREMENT ENGINEERING
Software engineering occurs as a consequence of a process called system engineering.
SYSTEM ENGINEERING
The overall objective of the system must be determined:
The role of hardware, software, people, database,procedures, and other system elements
must be identified.
Operational requirements must be elicited/extracted, analyzed, specified, modeled,
validated, and managed. These activities are the foundation of system engineering.
Business process engineering: The system engineering process is called business process
engineering when the context of the engineering work focuses on a business enterprise.
Product engineering: When a product is to be built, the process is called product engineering.
Both business process engineering and product engineering attempt to bring order to the
development of computer-based systems
COMPUTER-BASED SYSTEM
We define a computer-based system as A set or arrangement of elements that are organized to
accomplish some predefined goal by processing information.
To accomplish the goal, a computer-based system makes use of a variety of system elements.
Software: Computer programs, data structures, and related documentation that serve to effect
the logical method, procedure, or control that is required.
Hardware: Electronic devices that provide computing capability, the interconnectivity devices
(e.g., network switches, telecommunications devices) that enable the flow of data, and
electromechanical devices (e.g., sensors, motors, pumps) that provide external world function.
People: Users and operators of hardware and software.
Database: A large, organized collection of information that is accessed via software.
Documentation: Descriptive information (e.g., hardcopy manuals, on-line help files, Web
sites) that represents the use and/or operation of the system.
Procedures: The steps that define the specific use of each system element or the procedural
context in which the system resides.
SYSTEM ENGINEERING HIERARCHY
System engineering encompasses a collection of top-down and bottom-up methods to navigate the
hierarchy illustrated in figure 3.1. The system engineering process usually begins with a "world
view". That is, the entire business or product domain is examined to ensure that the proper
business or technology context can be established. The world view is refined to focus more fully
on a specific domain of interest. Within a specific domain, the need for targeted system elements
is analysed. Finally, the analysis, design, and construction of a targeted system element are
initiated. At the top of the hierarchy, a very broad context is established and, at the bottom,
detailed activities, performed by the relevant engineering disciplines, are conducted.
The world view is refined to focus more fully on specific domain of interest. Within a
specific domain, the need for targeted system elements (e.g., data, software, hardware, people) is
analyzed.
The world view (WV) is composed of a set of domains (Di), each of which can be a
system or system of systems in its own right:
WV = {D1, D2, D3, ..., Dn}
Each domain is composed of specific elements (Ej), each of which serves some role in
accomplishing the objectives and goals of the domain or component:
Di = {E1, E2, E3, ..., Em}
Finally, each element is implemented by specifying the technical components (Ck) that achieve
the necessary function for an element:
Ej = {C1, C2, C3, ..., Ck}
In the software context, a component could be a computer program, a reusable program
component, a module, a class or object, or even a programming language statement.
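The WV, domain, element, component hierarchy maps naturally onto nested collections. The domain, element, and component names below are invented purely for illustration.

```python
world_view = {                              # WV: a set of domains
    "shipping_domain": {                    # Di: the elements of one domain
        "package_sorting": [                # Ej: the components of one element
            "barcode_reader_driver",        # Ck: a technical component
            "sorting_algorithm_module",
        ],
        "billing": ["invoice_class", "tariff_table"],
    },
}

def components_of(wv, domain, element):
    """Navigate top-down: world view -> domain -> element -> components."""
    return wv[domain][element]
```

Walking this structure top-down mirrors the system engineering process itself: broad context first, detailed components last.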
System Modeling
System modeling is an important element of the system engineering process. Whether the focus
is on the world view or the detailed view, the engineer creates models that:
• Define the process that serves the needs of the view under consideration.
• Represent the behavior of the processes and the assumption on which the behavior is based.
To construct a system model, the engineer should consider a number of restraining factors:
1. Assumptions that reduce the number of possible permutations and variations, thus enabling a
model to reflect the problem in a reasonable manner.
2. Simplifications that enable the model to be created in a timely manner.
3. Limitations that help to bound the system.
4. Constraints that guide the manner in which the model is created and the approach taken
when the model is implemented.
5. Preferences that indicate the preferred architecture for all data, functions, and technology.
System Simulations
Many computer-based systems interact with the real world in a reactive fashion. That is, real-
world events are monitored by the hardware and software that form the computer-based system,
and based on these events, the system imposes control on the machines, processes, and even people
who cause the events to occur. Real-time and embedded systems often fall into the reactive
systems category.
There are many systems in the reactive category that control machines and/or processes (e.g.,
commercial aircraft or petroleum refineries) and must operate with an extremely high degree of
reliability. If the system fails, significant economic or human loss could occur. For this reason,
system modeling and simulation tools are used to help eliminate surprises when reactive
computer- based systems are built. These tools are applied during the system engineering
process, while the role of hardware and software, databases, and people is being specified.
Modeling and simulation tools enable a system engineer to “test drive” a specification of the
system.
System Simulation Tools:
Objective: System simulation tools provide the software engineer with the ability to predict the
behavior of a real-time system prior to the time that it is built. In addition, these tools enable the
software engineer to develop mock-ups of the real-time system, allowing the customer to gain
insight into the function, operation, and response prior to actual implementation.
Mechanics: Tools in this category allow a team to define the elements of a computer- based
system and then execute a variety of simulations to better understand the operating
characteristics and overall performance of the system. Two broad categories of system
simulation tools exist: (1) general purpose tools that can model virtually any computer- based
system, and (2) special purpose tools that are designed to address a specific application domain (
e.g., aircraft avionics systems, manufacturing systems, electronic- systems).
Representative Tools: CSIM, developed by Lockheed Martin Advanced Technology Labs
(www.atl.external.lmco.com), is a general purpose discrete-event simulation tool for block
diagram-oriented systems.
Simics, developed by Virtutech (www.virtutech.com), is a system simulation platform that can
model and analyze both hardware and software-based systems.
A useful set of links to a wide array of systems simulation resources can be found at
http://www.idsia.ch/-andrea/simtools.html.
BUSINESS PROCESS ENGINEERING
The goal of business process engineering is to define architectures that will enable a business to
use information effectively. When taking a world view of a company's information technology
needs, there is little doubt that system engineering is required. Not only is the specification of
the appropriate computing architecture required, but the software architecture that populates the
organization's unique configuration of computing resources must also be developed. Business
process engineering is one approach for creating an overall plan for implementing the computing
architecture.
The different architectures must be analyzed and designed within the context of business
objectives and goals:
• Data architecture
• Applications architecture
• Technology infrastructure
The data architecture provides a framework for the information needs of a business or
business function. The individual building blocks of the architecture are the data objects that are
used by the business. A data object contains a set of attributes that define some aspect, quality,
characteristic, or descriptor of the data being described.
Once a set of data objects is defined, their relationships are identified. A relationship indicates
how objects are connected to one another. As an example, consider the objects customer and
product A. The two objects can be connected by the relationship purchases; that is, a customer
purchases product A or product A is purchased by a customer. The data objects (there may be
hundreds or even thousands for a major business activity) flow between business functions, are
organized within a database, and are transformed to provide information that serves the needs of
the business.
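The customer-purchases-product example above can be sketched as data objects (attributes only) plus an explicit relationship between them. The specific attribute names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Customer:          # a data object with descriptive attributes
    name: str
    city: str

@dataclass
class Product:           # another data object
    name: str
    price: float

# The relationship "purchases" connects customer objects to product objects.
purchases = []

customer = Customer("Acme Stores", "Pune")
product_a = Product("Product A", 250.0)
purchases.append((customer, product_a))   # a customer purchases product A
```

In a real data architecture such relationships would be organized in a database and navigated by the application architecture described next.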
The application architecture encompasses those elements of a system that transform objects
within the data architecture for some business purpose. In the context of this book, we consider
the application architecture to be the system of programs (software) that performs this
transformation. However, in a broader context, the application architecture might incorporate the
role of people (who are information transformers and users) and business procedures that have
not been automated.
The technology infrastructure provides the foundation for the data and application
architectures. The infrastructure encompasses the hardware and software that are used to support
the applications and data. This includes computers, operating systems, networks,
telecommunication links, storage technologies, and the architecture (e.g., client/server) that has
been designed to implement these technologies.
Requirement Engineering
Requirements engineering helps software engineers to better understand the problem they
will work to solve. It encompasses the set of tasks that lead to an understanding of what the
business impact of the software will be, what the customers want, and how end-users will
interact with this software.
Understanding the requirements of a problem is among the most difficult tasks that a
software engineer faces. When you first think about it, requirements engineering doesn’t seem
that hard. After all, doesn’t the customer know what is required? Shouldn’t the end-users have a
good understanding of the features and functions that will provide benefit? Surprisingly, in many
instances the answer to these questions is no. And even if customers and end-users are explicit in
their needs, those needs will change throughout the project. Requirements engineering is hard.
In the foreword to Ralph Young's book Effective Requirements Practices, it is written:
"It's your worst nightmare. A customer walks into your office, sits down, looks you straight in
the eye, and says, 'I know you think you understand what I said, but what you don't understand
is what I said is not what I mean.' Invariably, this happens late in the project, after deadline
commitments have been made, reputations are on the line, and serious money is at stake."
All of those who have worked in the systems and software business for more than a few years have
lived this nightmare, and yet, few of them have learned to make it go away. We struggle when
we try to elicit requirements from our customers. We have trouble understanding the information
that we do acquire. We often record requirements in a disorganized manner, and we spend far too
little time verifying what we do record. We allow change to control us, rather than establishing
mechanisms to control change. In short, we fail to establish a solid foundation for the system or
software. Each of these problems is challenging. When they are combined, the outlook is
daunting for even the most experienced managers and practitioners. But solutions do exist.
It would be dishonest to call requirements engineering the “solution” to the challenges noted
above. But it does provide us with a solid approach for addressing these challenges.
What is it? Requirements engineering helps software engineers to better understand the problem
they will work to solve. It encompasses the set of tasks that lead to an understanding of what the
business impact of the software will be, what the customer wants, and how end-users will
interact with the software.
Who does it? Software engineers (sometimes referred to as system engineers or analysts in the
IT world) and other project stakeholders (managers, customers, end-users) all participate in
requirements engineering.
Why is it important? Designing and building an elegant computer program that solves the
wrong problem serves no one’s needs. That’s why it is important to understand what the
customer wants before you begin to design and build a computer-based system.
What are the steps? Requirements engineering begins with inception--- a task that defines the
scope and nature of the problem to be solved. It moves onward to elicitation task that helps the
customer to define what is required, and then elaboration—where basic requirements are refined
and modified. As the customer defines the problem, negotiation occurs--- what are the priorities,
what is essential and when is it required? Finally, the problem is specified in some manner and
then reviewed or validated to ensure that your understanding of the problem and the customer’s
understanding of the problem coincide.
What is the work product? The intent of the requirements engineering process is to provide all
parties with a written understanding of the problem. This can be achieved through a number of
work products: user scenarios, functions and features lists, analysis models, or a specification.
How do I ensure that I’ve done it right? Requirements engineering work products are
reviewed with the customer and end-users to ensure that what you have learned is what they
really meant. A word of warning: even after all parties agree, things will change, and they will
continue to change throughout the project.
A Bridge to Design and Construction:
Designing and building computer software is challenging, creative, and just plain fun. In fact,
building software is so compelling that many software developers want to jump right in before
they have a clear understanding of what is needed. They argue that things will become clear as
they build : that project stakeholders will be able to better understand need only after examining
early iterations of the software; that things change so rapidly that requirements engineering is a
waste of time; that the bottom line is producing a working program and that all else is secondary.
What makes these arguments seductive is that they contain elements of truth. But each is flawed,
and all can lead to a failed software project.
Fred Brooks: "The hardest single part of building a software system is deciding what to build. No
other part of the work so cripples the resulting system if done wrong. No other part is more
difficult to rectify later."
Requirements engineering, like all other software engineering activities, must be adapted to the
needs of the process, the project, the product, and the people doing the work. From a software
process perspective, requirements engineering (RE) is a software engineering action that begins
during the communication activity and continues into the modeling activity.
In some cases an abbreviated approach may be chosen. In others, every task defined for
comprehensive requirements engineering must be performed rigorously. Overall, the software
team must adapt its approach to RE. But adaptation does not mean abandonment. It is essential
that the software team makes a real effort to understand the requirements of a problem before the
team attempts to solve the problem.
Requirements engineering builds a bridge to design and construction. But where does the
bridge originate? One could argue that it begins at the feet of the project stakeholders, where user
scenarios are described, functions and features are delineated, and project constraints are identified. Others
might suggest that it begins with a broader system definition, where software is but one
component of the larger system domain. But regardless of starting point, the journey across the
bridge takes us high above the project, allowing the software team to examine the context of the
software work to be performed; the specific needs that design and construction must address; the
priorities that guide the order in which work is to be completed; and the information, functions, and
behaviors that will have a profound impact on the resultant design.
Requirements Engineering Tasks:
Requirements engineering provides the appropriate mechanism for understanding what the
customer wants, analyzing need, assessing feasibility, negotiating a reasonable solution,
specifying the solution unambiguously, validating the specification, and managing the
requirements as they are transformed into an operational system. The requirements engineering
process is accomplished through the execution of seven distinct functions: inception, elicitation,
elaboration, negotiation, specification, validation, and management.
It is important to note that some of these requirements engineering functions occur in parallel
and all are adapted to the needs of the project. All strive to define what the customer wants, and
all serve to establish a solid foundation for the design and construction of what the customer
gets.
Inception: How does a software project get started? Is there a single event that becomes the catalyst for a
new computer-based system or product, or does the need evolve over time? There are no
definitive answers to these questions.
Capers Jones: "The seeds of major software disasters are usually sown in the first three months
of commencing the software project."
In some cases, a casual conversation is all that is needed to precipitate a major software
engineering effort. But in general, most projects begin when a business need is identified or a
potential new market or service is discovered. Stakeholders from the business community (e.g.,
business managers, marketing people, and product managers) define a business case for the idea,
try to identify the breadth and depth of the market, do a rough feasibility analysis, and identify a
working description of the project’s scope.
All of this information is subject to change (a likely outcome), but it is sufficient to precipitate
discussions with the software engineering organization.
The intent is to establish a basic understanding of the problem, the people who want a solution,
the nature of the solution that is desired, and the effectiveness of preliminary communication and
collaboration between the customer and the developer.
Elicitation:
It certainly seems simple enough- ask the customer, the users, and others what the objectives for
the system or product are, what is to be accomplished, how the system or product fits into the
needs of the business, and finally, how the system or product is to be used on a day-to-day basis.
But it isn’t simple—it‘s very hard.
Christel and Kang identify a number of problems that help us understand why requirements
elicitation is difficult:
• Problems of scope: The boundary of the system is ill-defined, or the customers/users specify
unnecessary technical detail that may confuse, rather than clarify, overall system objectives.
• Problem of understanding: The customers/users are not completely sure of what is needed,
have a poor understanding of the capabilities and limitations of their computing environment,
don’t have a full understanding of the problem domain, have trouble communicating needs to the
system engineer, omit information that is believed to be “obvious” specify requirements that
conflict with the needs of other customers/ users, or specify requirements that are ambiguous or
untestable.
• Problems of volatility: The requirements change over time.
To help overcome these problems, requirements engineers must approach the requirements
gathering activity in an organized manner.
Elaboration:
The information obtained from the customer during inception and elicitation is expanded and
refined during elaboration. This requirements engineering activity focuses on developing a
refined technical model of software functions, features, and constraints.
Elaboration is an analysis modeling action that is composed of a number of modeling and
refinement tasks. Elaboration is driven by the creation and refinement of user scenarios that
describe how the end-user (and other actors) will interact with the system. Each user scenario is
parsed to extract analysis classes, business domain entities that are visible to the end-user. The
attributes of each analysis class are defined, the services that are required by each class are
identified, and a variety of supplementary UML diagrams are produced.
The end- result of elaboration is an analysis model that defines the informational, functional, and
behavioral domain of the problem.
SYSTEM MODELING
Because a system can be represented at different levels of abstraction (e.g., the world view, the
domain view, the element view), system models tend to be hierarchical or layered in nature. At
the top of the hierarchy, a model of the complete system is presented (the world view). Major data
objects, processing functions, and behaviors are represented without regard to the system
components that will implement the elements of the world view model. As the hierarchy is refined
or layered, component-level detail (in this case, representations of hardware, software, and so on)
is modeled. Finally, system models evolve into engineering models (which are further refined)
that are specific to the appropriate engineering discipline.
System Modeling tools:
Objective: System modeling tools provide the software engineer with the ability to model all
elements of a computer-based system using a notation that is specific to the tool.
Mechanics: Tool mechanics vary. In general, tools in this category enable a system engineer to
model (1) the structure of all functional elements of the system; (2) the static and dynamic
behavior of the system; and (3) the human-machine interface.
Representative Tools: Describe, developed by Embarcadero Technologies
(www.embarcadero.com), is a suite of UML-based modeling tools that can represent software or
complete systems.
Rational XDE and Rose, developed by Rational Technologies (www.rational.com), provide a
widely used, UML-based suite of modeling and development tools for computer-based systems.
Real-Time Studio, developed by Artisan Software (www.artisansw.com), is a suite of modeling
and development tools that support real-time system development.
Telelogic Tau, developed by Telelogic (www.tlelogic.com), is a UML-based tool suite that
supports analysis and design modeling as well as links to software construction features.
System Modeling with UML: UML provides a wide array of diagrams that can be used for
analysis and design at both the system and the software level. For the CLASS system, four
important system elements are modeled: (1) the hardware that enables CLASS; (2) the software
that implements database access and sorting; (3) the operator who submits various requests to the
system; and (4) the database that contains relevant bar code and destination information.
Another UML notation that can be used to model software is the class diagram (along with many
class-related diagrams discussed later in this book). At the system engineering level, classes are
extracted from the statement of the problem. For the CLASS system, candidate classes might be: Box,
Conveyor Line, Bar-code Reader, Shunt Controller, Operator Request, Report, Product, and others. Each
class encapsulates a set of attributes that depict all necessary information about the class. A class
description also contains a set of operations that can be applied to the class in the context of the
CLASS system.
The use-case diagram illustrates the manner in which an actor (in this case, the operator,
represented by a stick figure) interacts with the system. Each labeled oval inside the box (which
represents the CLASS system boundary) represents one use case: a text scenario that describes
an interaction with the system.
DESIGN CONCEPTS
Fundamental software design concepts provide the necessary framework for getting the product
right.
A) Abstraction
The two levels of abstraction are procedural and data abstraction. A procedural abstraction
refers to a sequence of instructions that has a specific and limited function. The name of a
procedural abstraction implies the function, but the specific details are suppressed.
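As a concrete illustration, here is a minimal Python sketch of a procedural abstraction; the function name and sorting example are hypothetical, chosen only to show how a name conveys the function while suppressing detail:

```python
# A procedural abstraction: the name "sort_ascending" conveys WHAT is done,
# while the specific details (comparison and swap logic) are suppressed inside.
def sort_ascending(values):
    """Return a new list with the values in increasing order."""
    result = list(values)  # work on a copy; the caller's list is untouched
    for i in range(len(result)):
        for j in range(0, len(result) - i - 1):
            if result[j] > result[j + 1]:
                result[j], result[j + 1] = result[j + 1], result[j]
    return result

# Callers rely only on the abstraction, not on the bubble-sort details inside.
print(sort_ascending([3, 1, 2]))  # [1, 2, 3]
```

A caller never needs to know that a bubble sort is used; the implementation could change without affecting any code that uses the abstraction.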
B) Architecture
Software architecture refers to "the overall structure of the software and the ways in which that
structure provides conceptual integrity for a system." A software architecture is the development
work product that gives the highest return on investment with respect to quality, schedule, and
cost.
C) Patterns
Design patterns describe a design structure that solves a particular design problem within a
specific context and amid "forces" that may have an impact on the manner in which the pattern is
applied and used.
D) Modularity
Modularity is the single attribute of software that allows a program to be intellectually
manageable.
E) Information Hiding
Modules should be specified and designed so that information (algorithms and data) contained
within a module is inaccessible to other modules that have no need for such information. This is
called information hiding.
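A minimal Python sketch of information hiding; the stack example is an assumption for illustration, not from the original text:

```python
class Stack:
    """The list used for storage is hidden; other modules use only push/pop."""

    def __init__(self):
        self._items = []  # leading underscore: internal detail, hidden from clients

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def is_empty(self):
        return not self._items

s = Stack()
s.push(10)
s.push(20)
print(s.pop())  # 20 -- clients never touch the underlying list directly
```

Because other modules depend only on push/pop, the hidden list could later be replaced (say, by a linked structure) without changing any client code.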
F) Functional Independence
Functional independence is achieved by developing modules with “single minded” function and
an “aversion” to excessive interaction with other modules. Stated another way, we want to design
software so that each module addresses a specific sub function of requirements and has a simple
interface when viewed from other parts of the program structure.
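To sketch functional independence, here are two hypothetical single-minded functions with simple interfaces; the invoice/tax example and the 18% rate are assumptions for illustration:

```python
# Each function addresses one subfunction and exposes a simple interface,
# so changes to tax rules never touch the formatting code, and vice versa.
def compute_tax(amount, rate=0.18):
    """Single-minded: compute tax only (the 18% rate is illustrative)."""
    return round(amount * rate, 2)

def format_invoice_line(description, amount):
    """Single-minded: format one line; knows nothing about tax rules."""
    return f"{description:<20} {amount:>10.2f}"

line = format_invoice_line("Tax", compute_tax(100.0))
print(line)
```

Each function has one reason to change, and their only interaction is a plain numeric value passed through a simple interface: high cohesion, low coupling.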
G) Refinement
Stepwise refinement is a top-down design strategy in which a program is developed by
successively refining levels of procedural detail. A hierarchy is developed by decomposing a
macroscopic statement of function (a procedural abstraction) in a stepwise fashion until
programming language statements are reached.
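A small Python sketch of stepwise refinement; the report example and helper names are hypothetical, used only to show a macroscopic function decomposed into refined steps:

```python
# Level 1: macroscopic statement of function (a procedural abstraction).
def produce_report(records):
    cleaned = remove_invalid(records)  # refined at level 2 below
    ordered = sort_by_name(cleaned)    # refined at level 2 below
    return render(ordered)             # refined at level 2 below

# Level 2: each step refined until programming language statements are reached.
def remove_invalid(records):
    return [r for r in records if r.get("name")]

def sort_by_name(records):
    return sorted(records, key=lambda r: r["name"])

def render(records):
    return "\n".join(r["name"] for r in records)

print(produce_report([{"name": "Bo"}, {"name": ""}, {"name": "Al"}]))
```

Each level-1 step started as a plain-language statement ("remove invalid records") and was refined into executable code, forming the hierarchy the text describes.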
H) Refactoring
Refactoring is the process of changing a software system in such a way that it does not alter the
external behavior of the code [design] yet improves its internal structure.
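A minimal before/after refactoring sketch in Python; the pricing example is an assumption for illustration. The external behavior (the returned total) is unchanged while the internal structure improves:

```python
# Before: manual accumulation, harder to read at a glance.
def total_price_before(items):
    t = 0
    for i in items:
        if i["qty"] > 0:
            t = t + i["qty"] * i["price"]
    return t

# After: same external behavior, clearer internal structure.
def total_price_after(items):
    return sum(i["qty"] * i["price"] for i in items if i["qty"] > 0)

items = [{"qty": 2, "price": 5.0}, {"qty": 0, "price": 9.0}]
assert total_price_before(items) == total_price_after(items)  # behavior preserved
print(total_price_after(items))  # 10.0
```

The assertion is the essence of refactoring: for the same inputs, the old and new versions must produce identical results.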
The Design Model
The design model can be viewed in two different dimensions: process and abstraction. The
process dimension indicates the evolution of the design model as design tasks are executed as part of the
software process. The abstraction dimension represents the level of detail as each element of the
analysis model is transformed into a design equivalent and then refined iteratively.
a) Data Design Elements
Data design creates a model of data and/or information that is represented at a high level of
abstraction (the customer/user's view of data). This data model is then refined into
progressively more implementation-specific representations that can be processed by the computer-
based system.
b) Architectural Design Elements
Architectural design elements give us an overall view of the software. The architecture model is
derived from three sources: (1) information about the application domain for the software to be
built; (2) specific analysis model elements such as data flow diagrams or analysis classes, their
relationships and collaborations for the problem at hand; and (3) the availability of architectural
patterns.
c) Component-level Design Elements
The component-level design defines data structures for all local data objects and
algorithmic details for all processing that occurs within a component, as well as an interface that
allows access to all component operations (behaviors).
Summary
• System modeling is one of the important elements in the system engineering process.
• Modeling and simulation tools enable system engineers to "test drive" a specification of the
system.
• The intent of object oriented analysis is to define all classes that are relevant to the problem to
be solved.
• The various levels of data flow diagrams support the efficient analysis of flow-oriented
modeling.
• In design model, process dimension indicates the evolution of the design model whereas
abstraction dimension represents the level of detail as each element of the analysis model is
transformed in to a design equivalent and then refined iteratively.
Introduction to requirement engineering
The process of collecting the software requirements from the client and then understanding,
evaluating and documenting them is called requirement engineering.
Requirement engineering constructs a bridge to design and construction.
Requirement engineering consists of seven different tasks as follow:
1. Inception
Inception is a task where the requirements engineer asks a set of questions to establish the
software process.
In this task, the problem is understood and evaluated with the proper solution.
It establishes a collaborative relationship between the customer and the developer.
The developer and customer decide the overall scope and the nature of the problem.
2. Elicitation
Elicitation means finding out the requirements from the stakeholders.
Eliciting requirements is difficult because of the following problems:
Problem of scope: The customer gives unnecessary technical detail rather than clarity on the
overall system objectives.
Problem of understanding: Poor understanding between the customer and the developer
regarding various aspects of the project, like the capabilities and limitations of the computing environment.
Problem of volatility: In this problem, the requirements change from time to time and it is
difficult while developing the project.
3. Elaboration
In this task, the information taken from the user during inception and elicitation is
expanded and refined.
Its main task is developing a refined model of the software using the functions, features and constraints
of the software.
4. Negotiation
In the negotiation task, a software engineer decides how the project will be achieved with
limited business resources.
Rough estimates of development are created, and the impact of each requirement on the
project cost and delivery time is assessed.
5. Specification
In this task, the requirement engineer constructs a final work product.
The work product is in the form of software requirement specification.
In this task, the requirements of the proposed software are formalized, covering its informational,
functional and behavioral aspects.
The requirements are formalized in both graphical and textual formats.
6. Validation
The work product built as an output of requirement engineering is
assessed for quality through a validation step.
Formal technical reviews by software engineers, customers and other
stakeholders serve as the primary requirements validation mechanism.
7. Requirement management
It is a set of activities that help the project team to identify, control and track the
requirements and changes can be made to the requirements at any time of the ongoing
project.
These tasks start with identifying and assigning a unique identifier to each
requirement.
After the requirements are finalized, a traceability table is developed.
Examples of traceability table entries are the features, sources, dependencies, subsystems
and interfaces of the requirements.
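A minimal sketch of a traceability table as a Python data structure; the requirement IDs, features and sources are hypothetical examples, not from the original text:

```python
# Each requirement gets a unique identifier plus the traceability attributes
# mentioned above (feature, source, dependencies, subsystem, interface).
traceability = {
    "REQ-001": {
        "feature": "User login",
        "source": "Customer interview",      # illustrative values
        "depends_on": [],
        "subsystem": "Authentication",
        "interface": "Login form",
    },
    "REQ-002": {
        "feature": "Password reset",
        "source": "Support ticket analysis",
        "depends_on": ["REQ-001"],
        "subsystem": "Authentication",
        "interface": "Email service",
    },
}

# Tracking a change: find every requirement that depends on REQ-001.
impacted = [rid for rid, row in traceability.items()
            if "REQ-001" in row["depends_on"]]
print(impacted)  # ['REQ-002']
```

This is exactly what the table buys the project team: when a requirement changes mid-project, the dependency column immediately identifies which other requirements must be re-examined.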
Eliciting Requirements
Requirements elicitation helps in collecting the requirements from the users.
The steps of eliciting requirements are as follows:
1. Collaborative requirements gathering
Requirements are gathered by conducting meetings between the developer and the
customer.
Rules for preparation and participation are fixed.
The main motive is to identify the problem, propose elements of the solution,
negotiate different approaches and specify a preliminary set of solution requirements
in an atmosphere that is valuable for achieving the goal.
2. Quality Function Deployment (QFD)
This technique translates the needs of the customer into technical requirements for the
software.
QFD helps design the software according to the demands of the customer.
QFD consist of three types of requirement:
Normal requirements
The objectives and goals of the system are stated through meetings with the
customer.
These requirements must be present for the customer to be satisfied.
Expected requirement
These requirements are implicit.
These are basic requirements that are not explicitly stated by the customer, but the
customer still expects them.
Exciting requirements
These features are beyond the expectation of the customer.
The developer adds some additional features or unexpected feature into the software to
make the customer more satisfied.
For example, for a mobile phone with standard features, the developer adds a few
additional functionalities like voice search or a multi-touch screen; the customer then becomes
more excited about those features.
3. Usage scenarios
Until the software team understands how the features and functions will be used by the
end users, it is difficult to move on to technical activities.
To solve this problem, the software team produces a set of scenarios that identify the
usage of the software.
These scenarios are called 'Use Cases'.
4. Elicitation work product
The work product created as a result of requirements elicitation depends on the size
of the system or product to be built.
The work product consists of a statement of need, feasibility, and a statement of scope for the system.
It also consists of a list of the users who participated in the requirements elicitation.
UCs do not only document requirements; as their form is like storytelling and uses text, both of
which are easy and natural for different stakeholders, they are also a good medium for
discussion and brainstorming. Hence, UCs can also be used for requirements elicitation and
problem analysis. While developing use cases, informal or formal models may also be built,
though they are not required.
UCs can be evolved in a stepwise refinement manner with each step adding more details. This
approach allows UCs to be presented at different levels of abstraction. Though any number of
levels of abstraction are possible, four natural levels emerge:
Actors and goals. The actor-goal list enumerates the use cases and specifies the actors
for each goal. (The name of the use case is generally the goal.) This table may be
extended by giving a brief description of each of the use cases. At this level, the use cases
together specify the scope of the system and give an overall view of what it does.
Completeness of functionality can be assessed fairly well by reviewing these.
Main success scenarios. For each of the use cases, the main success scenarios are
provided at this level. With the main scenarios, the system behavior for each use case is
specified. This description can be reviewed to ensure that interests of all the stakeholders
are met and that the use case is delivering the desired behavior.
Failure conditions. Once the success scenario is listed, all the possible failure conditions
can be identified. At this level, for each step in the main success scenario, the different
ways in which a step can fail form the failure conditions. Before deciding what should be
done in these failure conditions (which is done at the next level), it is better to enumerate
the failure conditions and review for completeness.
Failure handling. This is perhaps the most tricky and difficult part of writing a use case.
Often the focus is so much on the main functionality that people do not pay attention to
how failures should be handled. Determining what should be the behavior under different
failure conditions will often identify new business rules or new actors.
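The four levels can be sketched for one hypothetical use case; the "Withdraw cash" scenario below is an assumed example, represented as a Python structure only to make the levels concrete:

```python
# The four levels of abstraction for one hypothetical "Withdraw cash" use case.
use_case = {
    "actor": "Customer",
    "goal": "Withdraw cash",              # level 1: actors and goals
    "main_success_scenario": [            # level 2: main success scenario
        "Customer inserts card",
        "System validates PIN",
        "Customer enters amount",
        "System dispenses cash",
    ],
    "failure_conditions": [               # level 3: failure conditions
        "PIN is invalid",
        "Amount exceeds balance",
    ],
    "failure_handling": {                 # level 4: failure handling
        "PIN is invalid": "Allow two retries, then retain card",
        "Amount exceeds balance": "Show balance and ask for a smaller amount",
    },
}

# Review for completeness: every failure condition should have handling defined.
unhandled = [c for c in use_case["failure_conditions"]
             if c not in use_case["failure_handling"]]
print(unhandled)  # [] -- all failure conditions are handled
```

The completeness check at the end mirrors the review the text recommends: enumerate failure conditions first, then verify that each one has a decided behavior.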
The different levels can be used for different purposes. For discussion on overall functionality or
capabilities of the system, actors and goal-level description is very useful. Failure conditions, on
the other hand, are very useful for understanding and extracting detailed requirements and
business rules under special cases.
These four levels can also guide the analysis activity. A step-by-step approach for analysis when
employing use cases is:
Step 1. Identify the actors and their goals and get an agreement with the concerned stakeholders
as to the goals. The actor-goal list will clearly define the scope of the system and will provide an
overall view of what the system capabilities are.
Step 2. Understand and specify the main success scenario for each UC, giving more details about
the main functions of the system. Interaction and discussion are the primary means to uncover
these scenarios though models may be built, if required. During this step, the analyst may
uncover that to complete some use case some other use cases are needed, which have not been
identified. In this case, the list of use cases will be expanded.
Step 3. When the main success scenario for a use case is agreed upon and the main steps in its
execution are specified, then the failure conditions can be examined. Enumerating failure
conditions is an excellent method of uncovering special situations that can occur and which must
be handled by the system.
Step 4. Finally, specify what should be done for these failure conditions. As details of handling
failure scenarios can require a lot of effort and discussion, it is better to first enumerate the
different failure conditions and then get the details of these scenarios. Very often, when deciding
the failure scenarios, many new business rules of how to deal with these scenarios are uncovered.
Though we have explained the basic steps in developing use cases, at any step an analyst
may have to go back to earlier steps as during some detailed analysis new actors may emerge or
new goals and new use cases may be uncovered. That is, using use cases for analysis is also an
iterative task.
What should be the level of detail in a use case? There is no one answer to a question like
this; the actual answer always depends on the project and the situation. So it is with use cases.
Generally it is good to have sufficient details which are not overwhelming but are sufficient to
build the system and meet its quality goals. For example, if there is a small collocated team
building the system, it is quite likely that use cases which list the main exception conditions and
give a few key steps for the scenarios will suffice. On the other hand, for a project whose
development is to be subcontracted to some other organization, it is better to have more detailed
use cases.
For writing use cases, general technical writing rules apply. Use simple grammar, clearly
specify who is performing the step, and keep the overall scenario as simple as possible. Also,
when writing steps, for simplicity, it is better to combine some steps into one logical step, if it
makes sense. For example, steps “user enters his name,” “user enters his SSN,” and “user enters
his address” can be easily combined into one step “user enters personal information.”
The analysis model operates as a link between the 'system description' and the 'design model'.
In the analysis model, the information, functions and behaviour of the system are defined, and
these are translated into the architecture, interface and component-level designs during 'design
modeling'.
Elements of the analysis model
1. Scenario based element
This type of element represents the system from the user's point of view.
Scenario based elements are use case diagrams and user stories.
2. Class based elements
The objects of this type of element are manipulated by the system.
It defines the objects, attributes and relationships.
Collaboration occurs between the classes.
Class based elements are the class diagram and collaboration diagram.
3. Behavioral elements
Behavioral elements represent the states of the system and how they are changed by external
events.
The behavioral elements are the sequence diagram and state diagram.
4. Flow oriented elements
As information flows through a computer-based system, it gets transformed.
These elements show how data objects are transformed while they flow between the various system
functions.
The flow oriented elements are the data flow diagram and control flow diagram.
Analysis Rules of Thumb
Some rules of thumb must be followed while creating the analysis model.
The rules are as follows:
The model should focus on the requirements in the business domain. The level of abstraction
should be high, i.e., there is no need to give implementation details.
Every element in the model helps in understanding the software requirement and focus on
the information, function and behaviour of the system.
The consideration of infrastructure and other nonfunctional models should be delayed until design.
For example, a database may be required for a system, but the classes, functions and behavior of
the database are not initially required. If these are considered initially, then the
design is delayed.
Minimum coupling is required throughout the system. The interconnection between the
modules is known as 'coupling'.
The analysis model should give value to all the people related to the model.
The model should be as simple as possible, because a simple model always helps in easy
understanding of the requirements.
Concepts of data modeling
Analysis modeling starts with the data modeling.
The software engineer defines all the data objects that are processed within the system, and the
relationships between the data objects are identified.
Data objects
The data object is a representation of composite information.
Composite information means that an object has a number of different properties or
attributes.
For example, height is a single value, so it is not a valid data object; but dimensions, which contain
the height, the width and the depth, can be defined as an object.
Data Attributes
Each data object has a set of attributes.
Attributes serve one of three purposes:
Name an instance of the data object.
Describe the instance.
Make reference to another instance in another table.
Relationship
Relationships show how data objects are connected to each other.
Cardinality
Cardinality states the number of occurrences of one object that can be related to the number of occurrences of another
object.
The cardinality expressed as:
One to one (1:1)
One occurrence of an object is related to one occurrence of another object.
For example, one employee has only one ID.
One to many (1:N)
One occurrence of an object is related to many occurrences of another object.
For example, one college has many departments.
Many to many(M:N)
Many occurrences of one object are related to many occurrences of another object.
For example, many customers place orders for many products.
Modality
If the relationship is optional, the modality of the relationship is zero.
If the relationship is compulsory, the modality of the relationship is one.
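Cardinality and modality can be sketched with simple Python structures; the college/department and employee examples come from the text above, while the function names are hypothetical:

```python
# 1:N -- one occurrence of "College" is related to many "Department" occurrences.
college = {"name": "Engineering College", "departments": []}

def add_department(college, dept_name):
    college["departments"].append({"name": dept_name,
                                   "college": college["name"]})

add_department(college, "Computer Science")
add_department(college, "Mechanical")

# 1:1 with modality one -- each employee must have exactly one ID;
# the relationship is compulsory, so a missing ID is rejected.
def make_employee(name, emp_id):
    if emp_id is None:
        raise ValueError("employee ID is mandatory (modality = 1)")
    return {"name": name, "id": emp_id}

print(len(college["departments"]))  # 2
```

The check in make_employee is what modality one means in practice: the related occurrence is not optional, so its absence is an error.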
Requirements Negotiation
To negotiate the requirements of a system to be developed, it is necessary to identify conflicts
and to resolve those conflicts. This is done by means of systematic conflict management. The
conflict management in requirements engineering comprises the following four tasks:
Conflict identification
Conflict analysis
Conflict resolution
Documentation of the conflict resolution
These four tasks are explained in the following sections.
Conflict Identification
Conflicts can arise during all requirements engineering activities. For example, different
stakeholders can utter contradicting requirements during elicitation.
Requirements Validation
It is the process of ensuring that the specified requirements meet the customer's needs. It is concerned
with finding problems in the requirements.
Such problems can lead to extensive rework costs when they are discovered in the later
stages of development, or after the system is in service.
The cost of fixing a requirements problem by making a system change is usually much greater
than repairing design or code errors, because a change to the requirements usually means
the design and implementation must also be changed and then re-tested.
During the requirements validation process, different types of checks should be carried out on the
requirements. These checks include:
1. Validity checks: The functions proposed by stakeholders should be aligned with what the
system needs to perform. You may find later that additional or different functions
are required instead.
2. Consistency checks: Requirements in the document shouldn't conflict with one another or give
different descriptions of the same function.
3. Completeness checks: The document should include all the requirements and constraints.
4. Realism checks: Ensure the requirements can actually be implemented using the
knowledge of existing technology, the budget, schedule, etc.
5. Verifiability: Requirements should be written so that they can be tested. This means you
should be able to write a set of tests that demonstrate that the system meets the specified
requirements.
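As a rough illustration of two of these checks, here is a hedged Python sketch; the requirements list, field names and check functions are all hypothetical, intended only to show how verifiability and completeness checks could be mechanized:

```python
# Hypothetical requirements list; each entry carries the fields the checks need.
requirements = [
    {"id": "R1", "text": "The system shall log in users within 2 seconds.",
     "testable": True},
    {"id": "R2", "text": "The system shall be user friendly.",
     "testable": False},  # vague wording: fails the verifiability check
]

def verifiability_check(reqs):
    """Return ids of requirements for which no test can be written."""
    return [r["id"] for r in reqs if not r["testable"]]

def completeness_check(reqs, required_ids):
    """Return the ids the document should include but does not."""
    present = {r["id"] for r in reqs}
    return sorted(set(required_ids) - present)

print(verifiability_check(requirements))                     # ['R2']
print(completeness_check(requirements, ["R1", "R2", "R3"]))  # ['R3']
```

"User friendly" is flagged because no objective test can demonstrate it, whereas the 2-second login requirement is directly measurable; the completeness check reports the missing R3.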
UNIT 5 TESTING STRATEGIES
A software testing or QA strategy is an outline describing the testing approach of the software
development cycle. It is made to inform testers, project managers and developers about major
issues of the testing process: the testing objectives, the total time and resources needed for the
project, the methods for testing new functionality, and the testing environment.
The strategy describes how to mitigate product risks of stakeholders at each test level, which
kinds of testing are to be performed, and which entry and exit criteria will apply. Strategies are
made based on development design documents.
FACTORS TO CONSIDER IN CHOOSING SOFTWARE TESTING STRATEGIES
1. RISKS.
Risk management is paramount during testing, so consider the risks and the risk level. For a
well-established app that is slowly evolving, regression is a critical risk; that is why
regression-averse strategies make a lot of sense there. For a new app, a risk analysis may
justify choosing a risk-based analytical strategy.
2. OBJECTIVES.
Testing should satisfy the requirements and needs of stakeholders to succeed. If the objective is
to look for as many defects as possible with less up-front time and effort invested, a dynamic
strategy makes sense.
3. SKILLS.
Take into consideration which skills the testers possess and lack, since strategies must not only
be chosen but executed as well. A standards-compliant strategy is a smart option when the team
lacks the skills or time to create its own approach.
4. PRODUCT.
Some products such as contract development software and weapons systems tend to have
requirements that are well-specified. This could lead to synergy with an analytical strategy that is
requirements-based.
5. BUSINESS.
Business considerations and strategy are often important. If using a legacy system as a model for
a new one, one could use a model-based strategy.
6. REGULATIONS.
In some instances, one may have to satisfy not only stakeholders, but regulators as well.
In this case, a methodical strategy that satisfies these regulators may be required.
You must choose testing strategies with an eye towards the factors mentioned earlier, the
schedule, budget, and feature constraints of the project and the realities of the organization and
its politics.
STRATEGIES IN SOFTWARE TESTING
A healthy software testing or QA strategy requires tests at all technology stack levels to ensure
that every part, as well as the entire system, works correctly.
1. Leave time for fixing. Setting aside time for testing is pointless if there is no time set aside
for fixing. Once problems are discovered, developers need time to fix them, and the company
needs time to retest the fixes as well. Without time and a plan for both, testing is not very
useful.
2. Discourage passing the buck. The same way that testers could fall short in their reports,
developers could also fall short in their effort to comprehend the reports. One way of minimizing
back and forth conversations between developers and testers is having a culture that will
encourage them to hop on the phone or have desk-side chat to get to the bottom of things.
Testing and fixing are all about collaboration. Although it is important that developers should not
waste time on a wild goose chase, it is equally important that bugs are not just shuffled back and
forth.
3. Manual testing has to be exploratory. A lot of teams prefer to script manual testing so
testers follow a set of steps and work their way through a set of tasks that are predefined for
software testing. This misses the point of manual testing. If something could be written down or
scripted in exact terms, it could be automated and belongs in the automated test suite. Real-world
use of the software will not be scripted, thus testers must be free to probe and break things
without a script.
4. Encourage clarity. Reporting bugs and asking for more information could create unnecessary
overhead costs. A good bug report could save time through avoiding miscommunication or a
need for more communication. In the same way, a bad bug report could lead to a fast dismissal
by a developer, and that creates problems. Anyone reporting bugs should make it a point to
create informative bug reports. However, it is also important for developers to go out of their
way to communicate effectively as well.
5. Test often. The same as all other forms of testing, manual testing will work best when it
occurs often throughout the development project, in general, weekly or bi-weekly. This helps in
preventing huge backlogs of problems from building up and crushing morale. Frequent testing is
considered the best approach.
Testing and fixing software could be tricky, subtle and political even. Nevertheless, as long as
one is able to anticipate and recognize common issues, things could be kept running smoothly.
Strategies of Software Testing
Introduction to Testing
Testing is a set of activities that are planned in advance, i.e., before development begins,
and conducted systematically.
The literature of software engineering defines various testing strategies that implement
testing.
All of these strategies provide a testing template.
The following are the characteristics that these testing templates possess:
To perform testing successfully, the developer should conduct effective technical
reviews.
Testing begins at the component level and works outward toward the integration of the
entire computer-based system.
Different testing techniques are appropriate at different points in time.
Testing is conducted by the developer of the software and by an independent test group.
Testing and debugging are different activities, but debugging must be
accommodated in any testing strategy.
Difference between Verification and Validation
Verification
1) Verification is the process of finding out whether the software meets the specified requirements for a particular phase.
2) It evaluates an intermediate product.
3) The objective of verification is to check whether the software is constructed according to the requirement and design specifications.
4) It describes whether the outputs are as per the inputs or not.
5) Verification is done before validation.
6) Plans, requirements, specifications and code are evaluated during verification.
7) Files and documents are checked manually.
Validation
1) Validation checks whether the software meets the requirements and expectations of the customer.
2) It evaluates the final product.
3) The objective of validation is to check whether the specifications are correct and satisfy the business need.
4) It explains whether the product is accepted by the user or not.
5) Validation is done after verification.
6) The actual product or software is tested during validation.
7) Files and documents are checked by executing the developed software.
Strategy of testing
A strategy for software testing may be viewed in the context of a spiral.
The following figure shows the testing strategy:
Unit testing
Unit testing starts at the centre of the spiral and concentrates on each unit as implemented in source code.
Integration testing
Integration testing focuses on the design and construction of the software architecture.
Validation testing
Validation testing checks that all requirements, such as functional, behavioral and performance requirements, are validated
against the constructed software.
System testing
System testing confirms all system elements and performance are tested entirely.
Testing strategy for procedural point of view
From a procedural point of view, testing includes the following steps.
1) Unit testing
2) Integration testing
3) High-order tests
4) Validation testing
These steps are shown in following figure:
The following are the issues to be considered while implementing software testing strategies:
Specify the requirements in a quantifiable manner before testing starts.
Generate a profile for each category of user.
Produce robust software that is designed to test itself.
Use Formal Technical Reviews (FTR) for effective testing.
Conduct FTRs to assess the test strategy and the test cases.
To improve the quality level of testing, generate test plans from the users' feedback.
Test strategies for conventional software
Following are the four strategies for conventional software:
1) Unit testing
2) Integration testing
3) Regression testing
4) Smoke testing
1) Unit testing
Unit testing focuses on the smallest unit of software design, i.e., the module or software
component.
The test strategy is conducted on each module interface to assess the flow of input and output.
The local data structure is examined to verify integrity during execution.
Boundary conditions are tested.
All error handling paths are tested.
All independent paths are tested.
Following figure shows the unit testing:
Unit test environment
The unit test environment is as shown in following figure:
Difference between stub and driver
Stub
1) A stub is considered a subprogram.
2) A stub does not accept test case data.
3) It replaces the lower-level modules of the program with dummy subprograms and is exercised by the driver above it.
Driver
1) A driver is a simple main program.
2) A driver accepts test case data.
3) It passes the data to the tested component and prints the returned results.
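A minimal Python sketch of a stub and a driver in a unit test; the order/discount example and all names are hypothetical, chosen only to show the two roles:

```python
# Component under unit test: depends on a lower-level discount module
# that is not yet written; the dependency is supplied by the caller.
def classify_order(amount, get_discount):
    rate = get_discount(amount)
    return "discounted" if rate > 0 else "full price"

# Stub: stands in for the unwritten discount module; returns canned values.
def discount_stub(amount):
    return 0.10 if amount > 100 else 0.0

# Driver: a simple main program that accepts test case data, passes it to
# the tested component, and prints the returned result.
def driver():
    for amount in (50, 200):
        print(amount, "->", classify_order(amount, discount_stub))

driver()
```

The stub lets classify_order be tested before its real dependency exists, while the driver feeds it test cases; both are throwaway scaffolding that disappears once integration is complete.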
2) Integration testing
Integration testing is used for the construction of software architecture.
There are two approaches to integration testing:
i) Non incremental integration testing
ii) Incremental integration testing
i) Non incremental integration testing
All the components are combined in advance.
If a set of errors occurs, correction is difficult because isolating the cause is complex.
ii) Incremental integration testing
The programs are built and tested in small increments.
The errors are easier to correct and isolate.
Interfaces are tested completely and a systematic test approach is applied.
Following are the incremental integration strategies:
a. Top-down integration
b. Bottom-up integration
Top-down integration
It is an incremental approach for building the software architecture.
It starts with the main control module or program.
Modules are merged by moving downward through the control hierarchy.
Following figure shows the top down integration.
Problems with top-down approach of testing
Following are the problems associated with the top-down approach of testing:
Top-down approach is an incremental integration testing approach in which the test
conditions are difficult to create.
If a set of errors occurs, correction is difficult to make because of the isolation of causes.
Complications may force the program to be expanded into additional modules.
If previous errors are corrected, new ones may get created, and the process continues. This
situation is like an infinite loop.
b. Bottom-up integration
In bottom up integration testing the components are combined from the lowest level in the
program structure.
The bottom-up integration is implemented in following steps:
The low level components are merged into clusters which perform a specific software sub
function.
A control program for testing (a driver) coordinates test case input and output.
The clusters are then tested.
The drivers are removed and the clusters are merged by moving upward in the program
structure.
Following figure shows the bottom up integration:
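The bottom-up steps above can be sketched as follows; the components and the cluster (`parse_amount`, `apply_tax`, `price_cluster`) are hypothetical:

```python
# Two hypothetical low-level components (illustration only).
def parse_amount(text):
    return float(text)

def apply_tax(amount, rate):
    return amount * (1 + rate)

# Cluster: low-level components merged to perform one software sub-function.
def price_cluster(text, rate):
    return apply_tax(parse_amount(text), rate)

# Driver: coordinates test case input and output for the cluster.
# Once the cluster passes, the driver is removed and the cluster is
# merged upward into the real program structure.
def cluster_driver(cases):
    return [round(price_cluster(text, rate), 2) for text, rate in cases]

print(cluster_driver([("100.0", 0.1), ("50.0", 0.2)]))
```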
3) Regression testing
In regression testing the software architecture changes every time when a new module is
added as part of integration testing.
4) Smoke testing
Smoke testing is used while the product is being developed: the software components are
translated into code and merged into a build, and each build is tested to expose
show-stopping errors early.
Difference between Regression and smoke testing
Regression testing Smoke testing
Regression testing is used to check whether
changes made to existing programs generate
defects in other modules.
Smoke testing is used at the time of
developing a software product.
In regression testing, components are tested again
to verify that no new errors are introduced.
It permits the software development team to
test the project on a frequent basis.
Regression testing needs extra manpower, so the
cost of the project increases.
Smoke testing does not need extra
manpower, so it does not affect the cost of
the project.
Testers conduct regression testing. The developer conducts smoke testing just
before releasing the product.
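Regression testing can be sketched as re-running an established suite after a change; the function and suite below are hypothetical:

```python
# Hypothetical existing function and its established test suite.
def discount(price, percent):
    return price - price * percent / 100

# Regression suite: (function, arguments, expected result) triples that
# passed before the latest change to the program.
regression_suite = [
    (discount, (200, 10), 180.0),
    (discount, (100, 0), 100.0),
]

# After modifying the program, re-run the whole suite to check that the
# change did not introduce defects into previously working behaviour.
def run_regression(suite):
    return [(fn.__name__, args) for fn, args, expected in suite
            if fn(*args) != expected]

print(run_regression(regression_suite))  # an empty list means no regressions
```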
Alpha and Beta testing
Alpha testing Beta testing
Alpha testing is executed at the developer's site
by the customer.
Beta testing is executed at end-user sites in
the absence of the developer.
It is used for software projects and applications. It is usually used for software products.
It is not open to the market and the public. It is always open to the market and the public.
Alpha testing does not have any other name. Beta testing is also known as field testing.
Alpha testing may not find all errors because the
developer does not know the type of user.
In beta testing, the developer corrects the
errors as users report problems.
In alpha testing, the developer modifies the code
before releasing the software, without user feedback.
In beta testing, the developer modifies the code
after getting feedback from users.
Approaches of Software Testing
System testing
System testing tests the behavior of the system or software against the software
requirement specification.
It is a series of various tests.
It allows us to test, verify, and validate both the business requirements and the
application architecture.
The primary motive of these tests is to fully exercise the computer-based system.
Following are the system tests for software-based system
1. Recovery testing
To check the recovery of the software, the software is forced to fail in various ways.
Reinitialization, checkpointing mechanisms, data recovery, and restart are evaluated for
correctness.
2. Security testing
It checks that the system's protection mechanisms secure it from improper penetration.
3. Stress testing
The system is executed in a manner that demands resources in abnormal quantity, frequency, or volume.
A variation of stress testing is known as sensitivity testing.
4. Performance testing
Performance testing is designed to test run-time performance of the system in the context of
an integrated system.
It always combines with the stress testing and needs both hardware and software
requirements.
5. Deployment testing
It is also known as configuration testing.
It verifies that the software works in each environment in which it is to operate.
Debugging process
The debugging process is not a testing process, but it occurs as a consequence of testing.
This process starts with the test cases.
The debugging process gives one of two results: either the cause is found and corrected,
or the cause is not found.
Debugging Strategies
Debugging aims to identify the exact cause of an error.
Following are the debugging strategies:
1. Brute force
Brute force is the most commonly used and the least efficient method for isolating the
cause of a software error.
This method is applied when all else fails.
2. Backtracking
Backtracking is successfully used in small programs.
The source code is traced manually till the cause is found.
3. Cause elimination
Cause elimination is based on the concept of binary partitioning.
It uses induction or deduction.
The data related to the error occurrence are organized to isolate potential causes.
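Binary partitioning can be sketched as repeatedly halving the failure-inducing input until the cause is isolated; the failure condition below is a hypothetical stand-in for a real bug:

```python
# Hypothetical failure condition: the bug is triggered by record 7.
def fails(batch):
    return 7 in batch

# Cause elimination by binary partitioning: split the input in half,
# keep the half that still reproduces the failure, and repeat until
# the smallest failure-inducing subset remains.
def isolate(batch):
    while len(batch) > 1:
        mid = len(batch) // 2
        left, right = batch[:mid], batch[mid:]
        batch = left if fails(left) else right
    return batch

print(isolate(list(range(10))))  # -> [7]
```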
Characteristics of testability
Following are the characteristics of testability:
1. Operability
If a better quality system is designed and implemented, then it is easier to test.
2. Observability
It is the ability to see what data is being tested.
With observability, incorrect output is easily identified.
Internal errors are automatically caught and reported.
3. Controllability
If users can control the software properly, then testing can be better automated and optimized.
4. Decomposability
If the software system is constructed from independent modules, then the modules can be tested independently.
5. Simplicity
The programs must display functional, structural, and code simplicity so that they are
easier to test.
6. Stability
Changes are rare during testing and do not invalidate existing tests.
7. Understandability
The architectural designs are well understood.
The technical documentation is quickly accessible, organized and accurate.
Attributes of 'good' test
A good test has a high probability of finding an error.
Testing time and resources are limited, so there is no purpose in running a test that serves
the same purpose as another test.
A test should have the highest probability of uncovering a whole class of errors.
Each test should be executed separately, and a test should be neither too simple nor too complex.
Basis Path Testing
Basis path testing was proposed by Tom McCabe.
It is a white box testing technique.
It allows the test case designer to obtain a logical complexity measure of a procedural design.
This measure guides the definition of a basis set of execution paths.
The elements of basis path testing are as follows:
Flow graph notation
Independent program paths
Deriving test cases
Graph matrices
What is Path Testing?
Path testing is a structural testing method that involves using the source code of a
program in order to find every possible executable path. It helps to determine all faults lying
within a piece of code. This method is designed to execute all or selected path through a
computer program.
Any software program includes multiple entry and exit points. Testing each of these
points is challenging as well as time-consuming. In order to reduce redundant tests and to
achieve maximum test coverage, basis path testing is used.
What is Basis Path Testing?
Basis path testing is a white box testing method that defines test cases based on the flows,
or logical paths, that can be taken through the program. In software engineering, basis path
testing involves execution of all possible blocks in a program and achieves maximum path
coverage with the least number of test cases. It is a hybrid of branch testing and path
testing methods.
The objective behind basis path testing is that it defines the number of independent paths,
so the number of test cases needed can be determined explicitly (maximizing the coverage of
each test case).
Here we will take a simple example to get a better idea of what basis path testing involves.
In the example (whose flow graph is not reproduced here), a few conditional statements are
executed depending on which condition is satisfied. There are three paths, or conditions, that
need to be tested to get the output:
Path 1: 1,2,3,5,6, 7
Path 2: 1,2,4,5,6, 7
Path 3: 1, 6, 7
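Since the flow graph for the example is not reproduced here, the following hypothetical function has a similar shape; the statement numbers in the comments loosely mirror the three paths listed above:

```python
# A hypothetical function with three basis paths. Statement numbers
# appear in the comments to mirror the paths listed in the text.
def classify(a, b):
    result = 0                  # 1
    if a > 0:                   # decision at 1
        if b > 0:               # 2
            result = a + b      # 3
        else:
            result = a - b      # 4
        result += 1             # 5
    return result               # 6, 7

# Path 1 (1,2,3,5,6,7): both conditions true
# Path 2 (1,2,4,5,6,7): first condition true, second false
# Path 3 (1,6,7):       first condition false
print(classify(2, 3), classify(4, -3), classify(-1, 5))
```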
Steps for Basis Path testing
The basic steps involved in basis path testing include
Draw a control graph (to determine different program paths)
Calculate Cyclomatic complexity (metrics to determine the number of independent paths)
Find a basis set of paths
Generate test cases to exercise each path
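Cyclomatic complexity, the metric in step 2, is V(G) = E - N + 2 for a connected flow graph with E edges and N nodes (equivalently, the number of predicate nodes plus one). For a hypothetical flow graph consistent with the three-path example above (7 nodes, 8 edges):

```python
# Cyclomatic complexity from a control flow graph:
# V(G) = E - N + 2 (E = edges, N = nodes, single connected component).
def cyclomatic_complexity(edges, nodes):
    return edges - nodes + 2

# 7 nodes and 8 edges give V(G) = 3, i.e. three independent paths,
# so three test cases suffice for basis path coverage.
print(cyclomatic_complexity(8, 7))  # -> 3
```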
Advantages of Basic Path Testing
It helps to reduce the redundant tests
It focuses attention on program logic
It facilitates analytical rather than arbitrary test case design
Test cases which exercise basis set will execute every statement in a program at least
once
Conclusion:
Basis path testing helps to determine all faults lying within a piece of code.
Control structure testing
Control structure testing broadens testing coverage and improves the quality of
white-box testing.
The control structure testing techniques are as follows:
1) Condition testing
All the logical conditions contained in the program modules are tested in condition
testing.
2) Data flow testing
A test path of a program is selected as per the location of definitions and the uses of variables in
the program.
3) Loop testing
It is a white-box testing technique. Loop testing mainly focuses on the validity of the loop
constructs.
Four different classes of loops are:
1. Simple loops
2. Concatenated loops
3. Nested loops
4. Unstructured loops
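Loop testing of a simple loop can be sketched by exercising zero, one, typical, and maximum passes through the loop; the function below is a hypothetical illustration:

```python
# Simple loop under test (hypothetical): sums the first n values.
def head_sum(values, n):
    total = 0
    for i in range(min(n, len(values))):
        total += values[i]
    return total

# Loop testing for a simple loop exercises: zero passes, one pass,
# a typical number of passes, and the maximum number of passes.
data = [1, 2, 3, 4]
print(head_sum(data, 0))          # skip the loop entirely -> 0
print(head_sum(data, 1))          # exactly one pass       -> 1
print(head_sum(data, 2))          # typical number         -> 3
print(head_sum(data, len(data)))  # maximum passes         -> 10
```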
What is Structural Testing ?
Structural testing, also known as glass box testing or white box testing is an approach where the
tests are derived from the knowledge of the software's structure or internal implementation.
The other names of structural testing include clear box testing, open box testing, logic driven
testing or path driven testing.
Structural Testing Techniques:
Statement Coverage - This technique is aimed at exercising all programming statements
with minimal tests.
Branch Coverage - This technique is running a series of tests to ensure that all branches are
tested at least once.
Path Coverage - This technique corresponds to testing all possible paths which means that
each statement and branch are covered.
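The three criteria can be contrasted on a small hypothetical function: a single test achieves statement coverage, while branch coverage needs a second test for the false branch.

```python
# Hypothetical function used to contrast the coverage criteria.
def grade(score):
    label = "fail"
    if score >= 40:
        label = "pass"
    return label

# Statement coverage: grade(50) alone executes every statement.
# Branch coverage additionally needs the false branch: grade(30).
# Path coverage coincides with branch coverage here, since there is
# only one decision (two paths in total).
assert grade(50) == "pass"   # true branch
assert grade(30) == "fail"   # false branch
print("both branches covered")
```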
Advantages of Structural Testing:
Forces test developer to reason carefully about implementation
Reveals errors in "hidden" code
Spots the Dead Code or other issues with respect to best programming practices.
Disadvantages of Structural Testing:
Expensive, as one has to spend both time and money to perform white box testing.
There is every possibility that a few lines of code are missed accidentally.
In-depth knowledge of the programming language is necessary to perform white box
testing.
System Testing
System testing tests the system as a whole. Once all the components are integrated, the
application as a whole is tested rigorously to see that it meets the specified Quality Standards.
This type of testing is performed by a specialized testing team.
System testing is important because of the following reasons −
System testing is the first step in the Software Development Life Cycle in which the
application is tested as a whole.
The application is tested thoroughly to verify that it meets the functional and technical
specifications.
The application is tested in an environment that is very close to the production
environment where the application will be deployed.
System testing enables us to test, verify, and validate both the business requirements as
well as the application architecture.
Black-Box Testing
The technique of testing without having any knowledge of the interior workings of the
application is called black-box testing. The tester is oblivious to the system architecture and
does not have access to the source code. Typically, while performing a black-box test, a tester
will interact with the system's user interface by providing inputs and examining outputs without
knowing how and where the inputs are worked upon.
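A black-box test can be sketched as follows: the cases are derived from the specification alone (here, the well-known leap-year rules) and exercise only the public interface, never the internals. The function is a stand-in for a system under test:

```python
# System under test (its internals are irrelevant to the tester).
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Black-box test cases: chosen from the specification (inputs and
# expected outputs), with no knowledge of the source code.
cases = [(2000, True), (1900, False), (2024, True), (2023, False)]
for year, expected in cases:
    assert leap_year(year) == expected
print("black-box cases passed")
```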
The following table lists the advantages and disadvantages of black-box testing.
Advantages Disadvantages
Well suited and efficient for large code segments. Limited coverage, since only a
selected number of test scenarios is
actually performed.
Code access is not required. Inefficient testing, due to the fact
that the tester only has limited
knowledge about an application.
Clearly separates user's perspective from the
developer's perspective through visibly defined
roles.
Blind coverage, since the tester
cannot target specific code segments
or error-prone areas.
Large numbers of moderately skilled testers can test
the application with no knowledge of
implementation, programming language, or
operating systems.
The test cases are difficult to design.
White-Box Testing
White-box testing is the detailed investigation of the internal logic and structure of the code.
White-box testing is also called glass-box testing or open-box testing. In order to perform
white-box testing on an application, a tester needs to know the internal workings of the code.
The tester needs to have a look inside the source code and find out which unit/chunk of the code
is behaving inappropriately.
What is White Box Testing?
White box testing is a testing technique that examines the program structure and derives test
data from the program logic/code. Other names for white box testing are clear box testing,
open box testing, logic driven testing, path driven testing, and structural testing.
White Box Testing Techniques:
Statement Coverage - This technique is aimed at exercising all programming statements
with minimal tests.
Branch Coverage - This technique is running a series of tests to ensure that all branches
are tested at least once.
Path Coverage - This technique corresponds to testing all possible paths which means
that each statement and branch is covered.
The following table lists the advantages and disadvantages of white-box testing.
Advantages Disadvantages
As the tester has knowledge of the source code, it
becomes very easy to find out which type of data
can help in testing the application effectively.
Due to the fact that a skilled tester is needed to
perform white-box testing, the costs are increased.
It helps in optimizing the code. Sometimes it is impossible to look into every nook
and corner to find out hidden errors that may create
problems, as many paths will go untested.
Extra lines of code, which can bring in hidden
defects, can be removed.
It is difficult to maintain white-box testing, as it
requires specialized tools like code analyzers and
debugging tools.
Due to the tester's knowledge about the code,
maximum coverage is attained during test scenario
writing.
Difference between white and black box testing
White-Box Testing Black-box Testing
White-box testing is known as glass-box
testing.
Black-box testing is also called behavioral testing.
It starts early in the testing process. It is applied in the final stages of testing.
In this testing knowledge of
implementation is needed.
In this testing knowledge of implementation is not needed.
White box testing is mainly done by
the developer.
This testing is done by the testers.
In this testing, the tester must be
technically sound.
In black box testing, testers may or may not be technically sound.
Various white box testing methods
are:
Basic Path Testing and Control
Structure Testing.
Various black box testing methods are:
Graph-Based testing method, Equivalence partitioning, Boundary
Value Analysis, Orthogonal Array Testing.
Software engineering - Layered technology
Software engineering is a fully layered technology.
To develop a software, we need to go from one layer to another.
All these layers are related to each other and each layer demands the fulfillment of the
previous layer.
The layered technology consists of:
1. Quality focus
The characteristics of good quality software are:
Correctness of the functions required to be performed by the software.
Maintainability of the software
Integrity i.e. providing security so that the unauthorized user cannot access information or
data.
Usability i.e. the efforts required to use or operate the software.
2. Process
It is the base layer or foundation layer for the software engineering.
The software process is the key to keep all levels together.
It defines a framework that includes different activities and tasks.
In short, it covers all activities, actions and tasks required to be carried out for software
development.
3. Methods
The methods provide the answers to all the 'how-to' questions that are asked during the process.
It provides the technical way to implement the software.
It includes collection of tasks starting from communication, requirement analysis,
analysis and design modelling, program construction, testing and support.
4. Tools
The software engineering tool is an automated support for the software development.
The tools are integrated, i.e., the information created by one tool can be used by another
tool.
For example, Microsoft Publisher can be used as a web design tool.
The process framework defines a small set of activities that are applicable to all types
of projects.
The software process framework is a collection of task sets.
Task sets consist of a collection of small work tasks, project milestones, work
products, and software quality assurance points.
Umbrella activities
Typical umbrella activities are:
1. Software project tracking and control
In this activity, the developing team assesses the project plan and compares it with the
predefined schedule.
If these project plans do not match with the predefined schedule, then the required actions
are taken to maintain the schedule.
2. Risk management
Risk is an event that may or may not occur.
If the event occurs, then it causes some unwanted outcome. Hence, proper risk
management is required.
3. Software Quality Assurance (SQA)
SQA is the planned and systematic pattern of activities which are required to give a
guarantee of software quality.
For example, during the software development meetings are conducted at every stage of
development to find out the defects and suggest improvements to produce good quality
software.
4. Formal Technical Reviews (FTR)
FTR is a meeting conducted by the technical staff.
The motive of the meeting is to detect quality problems and suggest improvements.
The technical person focuses on the quality of the software from the customer point of
view.
5. Measurement
Measurement consists of the effort required to measure the software.
The software cannot be measured directly. It is measured by direct and indirect measures.
Direct measures like cost, lines of code, size of software etc.
Indirect measures such as quality of software which is measured by some other factor.
Hence, it is an indirect measure of software.
6. Software Configuration Management (SCM)
It manages the effect of change throughout the software process.
7. Reusability management
It defines the criteria for reuse of the product.
The quality of software is good when the components of the software are developed for
certain application and are useful for developing other applications.
8. Work product preparation and production
It consists of the activities that are needed to create the documents, forms, lists, logs and
user manuals for developing a software.
A software process is a collection of various activities.
There are five generic process framework activities:
1. Communication: The software development starts with the communication between customer and developer.
2. Planning:
It consists of complete estimation, scheduling for project development and tracking.
3. Modeling:
Modeling consists of complete requirement analysis and the design of the project like
algorithm, flowchart etc.
The algorithm is the step-by-step solution of the problem and the flow chart shows a
complete flow diagram of a program.
4. Construction:
Construction consists of code generation and the testing part.
Coding part implements the design details using an appropriate programming language.
Testing is to check whether the flow of coding is correct or not.
Testing also check that the program provides desired output.
5. Deployment:
Deployment step consists of delivering the product to the customer and take feedback
from them.
If the customer wants some corrections or demands for the additional capabilities, then
the change is required for improvement in the quality of the software.
Prescriptive Process Models
The following framework activities are carried out irrespective of the process model chosen by
the organization.
1. Communication
2. Planning
3. Modeling
4. Construction
5. Deployment
The name 'prescriptive' is given because the model prescribes a set of activities, actions, tasks,
quality assurance points, and change control mechanisms for every project.
There are three types of prescriptive process models. They are:
1. The Waterfall Model
2. Incremental Process model
3. RAD model
1. The Waterfall Model
The waterfall model is also called the 'linear sequential model' or the 'classic life cycle
model'.
In this model, each phase is fully completed before the beginning of the next phase.
This model is used for the small projects.
In this model, feedback is taken after each phase to ensure that the project is on the right
path.
Testing part starts only after the development is complete.
NOTE: The description of the phases of the waterfall model is same as that of the process
model.
An alternative design for 'linear sequential model' is as follows:
Advantages of waterfall model
The waterfall model is simple and easy to understand, implement, and use.
All the requirements are known at the beginning of the project, hence it is easy to
manage.
It avoids overlapping of phases because each phase is completed at once.
This model works for small projects because the requirements are understood very well.
This model is preferred for those projects where the quality is more important as
compared to the cost of the project.
Disadvantages of the waterfall model
This model is not good for complex and object oriented projects.
It is a poor model for long projects.
The problems with this model are not uncovered until software testing begins.
The amount of risk is high.
2. Incremental Process model
The incremental model combines elements of the waterfall model applied in
an iterative fashion.
The first increment in this model is generally a core product.
Each increment builds the product and submits it to the customer for any suggested
modifications.
The next increment implements the customer's suggestions and adds additional
requirements to the previous increment.
This process is repeated until the product is finished.
For example, the word-processing software is developed using the incremental model.
Advantages of incremental model
This model is flexible because the cost of development is low and initial product delivery
is faster.
It is easier to test and debug during the smaller iteration.
Working software is generated quickly and early during the software life cycle.
The customers can respond to its functionalities after every increment.
Disadvantages of the incremental model
The cost of the final product may cross the cost estimated initially.
This model requires a very clear and complete planning.
The planning of design is required before the whole system is broken into small
increments.
Customer demands for additional functionality after every increment can cause
problems in the system architecture.
3. RAD model
RAD is a Rapid Application Development model.
Using the RAD model, software product is developed in a short period of time.
The initial activity starts with the communication between customer and developer.
Planning depends upon the initial requirements and then the requirements are divided into
groups.
Planning is important so that teams can work together on different modules.
The RAD model consist of following phases:
1. Business Modeling
Business modeling consist of the flow of information between various functions in the
project.
For example what type of information is produced by every function and which are the
functions to handle that information.
A complete business analysis should be performed to get the essential business
information.
2. Data modeling
The information from the business modeling phase is refined into a set of data objects
that are essential to the business.
The attributes of each object are identified and the relationships between objects are defined.
3. Process modeling
The data objects defined in the data modeling phase are changed to fulfil the information
flow to implement the business model.
The process description is created for adding, modifying, deleting or retrieving a data
object.
4. Application generation
In the application generation phase, the actual system is built.
To construct the software the automated tools are used.
5. Testing and turnover
The prototypes are independently tested after each iteration so that the overall testing
time is reduced.
The data flow and the interfaces between all the components are fully tested. Hence,
most of the programming components are already tested.
Evolutionary Process Models
Evolutionary models are iterative type models.
They allow developers to create increasingly more complete versions of the software.
Following are the evolutionary process models.
1. The prototyping model
2. The spiral model
3. Concurrent development model
1. The Prototyping model
Prototype is defined as first or preliminary form using which other forms are copied or
derived.
The prototyping model begins with a set of general objectives for the software.
It does not identify the requirements like detailed input, output.
It is software working model of limited functionality.
In this model, working programs are quickly produced.
The different phases of Prototyping model are:
1. Communication
In this phase, the developer and customer meet and discuss the overall objectives of the software.
2. Quick design
Quick design is implemented when requirements are known.
It includes only the important aspects like input and output format of the software.
It focuses on those aspects which are visible to the user rather than the detailed plan.
It helps to construct a prototype.
3. Modeling quick design
This phase gives a clear idea about the development of the software because a working
model of it is now built.
It allows the developer to better understand the exact requirements.
4. Construction of prototype
The prototype is constructed and then evaluated by the customer.
5. Deployment, delivery, feedback
If the user is not satisfied with the current prototype, then it is refined according to the
requirements of the user.
The process of refining the prototype is repeated until all the requirements of users are
met.
When the users are satisfied with the developed prototype then the system is developed
on the basis of final prototype.
Advantages of Prototyping Model
The prototyping model does not require detailed knowledge of input, output, processes,
operating-system adaptability, or full machine interaction up front.
In the development process of this model users are actively involved.
The development process is the best platform for the user to understand the system.
Errors are detected much earlier.
Gives quick user feedback for better solutions.
It identifies the missing functionality easily. It also identifies the confusing or difficult
functions.
Disadvantages of Prototyping Model:
Client involvement is greater, and it is not always convenient for the developer.
It is a slow process because it takes more time for development.
Many changes can disturb the rhythm of the development team.
The prototype may be thrown away when the users are confused by it.
2. The Spiral model
The spiral model is a risk-driven process model.
It is used to guide software projects.
In the spiral model, if a risk is found during risk analysis, then alternate solutions are
suggested and implemented.
It is a combination of prototype and sequential model or waterfall model.
In one iteration all activities are done; for large projects, the output of each iteration is small.
The framework activities of the spiral model are as shown in the following figure.
NOTE: The description of the phases of the spiral model is same as that of the process model.
Advantages of Spiral Model
It reduces high amount of risk.
It is good for large and critical projects.
It gives strong approval and documentation control.
In spiral model, the software is produced early in the life cycle process.
Disadvantages of Spiral Model
It can be costly to develop a software model.
It is not used for small projects.
3. The concurrent development model
The concurrent development model is also called the concurrent model.
The communication activity is completed in the first iteration and exits into the awaiting-
changes state.
The modeling activity, having completed its initial communication, then goes into the
under-development state.
If the customer specifies the change in the requirement, then the modeling activity moves
from the under development state into the awaiting change state.
In the concurrent process model, activities move from one state to another state.
Advantages of the concurrent development model
This model is applicable to all types of software development processes.
It is easy for understanding and use.
It gives immediate feedback from testing.
It provides an accurate picture of the current state of a project.
Disadvantages of the concurrent development model
It needs better communication between the team members. This may not be achieved all
the time.
It requires keeping track of the status of the different activities.
Agile principles
The highest priority of this process is to satisfy the customer.
Acceptance of changing requirement even late in development.
Frequently deliver working software within a short time span.
Throughout the project business people and developers work together on daily basis.
Projects are created around motivated people if they are given the proper environment
and support.
Face to face interaction is the most efficient method of moving information in the
development team.
Primary measure of progress is a working software.
Agile process helps in sustainable development.
Continuous attention to technical excellence and good design increases agility.
The best architectures, designs, and requirements emerge from self-organizing teams.
Simplicity is necessary in development.
Extreme Programming (XP)
Extreme Programming is the most commonly used agile process model.
It uses the concept of object-oriented programming.
A developer focuses on the framework activities like planning, design, coding and
testing. XP has a set of rules and practices.
XP values
Following are the values for extreme programming:
1. Communication
Building software development process needs communication between the developer and
the customer.
Communication is important for requirement gathering and discussing the concept.
2. Simplicity
The simple design is easy to implement in code.
3. Feedback
Feedback guides the development process in the right direction.
4. Courage
In every development process there will always be pressure situations. The courage or the
discipline to deal with them surely makes the task easier.
5. Respect
The agile process should inculcate the habit of respecting all team members, other stakeholders,
and the customer.
The XP Process
The XP process comprises four framework activities:
1. Planning
Planning starts with requirements gathering, which enables the XP team to understand the business rules for the software.
The customer and developer work together to finalize the requirements.
2. Design
The XP design follows the 'keep it simple' principle.
A simple design is always preferred over a more complex representation.
3. Coding
Coding starts after the initial design work is over.
Once the initial design is done, the team creates a set of unit tests that exercise each situation that should be part of the release.
The developer then focuses on what must be implemented to pass each test.
Pair programming, in which two people work together to create the code, is an important concept in the coding activity.
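The test-first idea described above can be sketched in a few lines. This is a minimal illustration, not XP doctrine; the function name add_to_cart and the shopping-cart scenario are invented for the example.

```python
# In XP, unit tests are written before the code they exercise.
# Tests first (the hypothetical add_to_cart does not exist yet):
def test_add_new_item():
    assert add_to_cart({}, "pen", 2) == {"pen": 2}

def test_add_existing_item():
    assert add_to_cart({"pen": 2}, "pen", 1) == {"pen": 3}

# Implementation written just to make the tests above pass.
def add_to_cart(cart, item, qty):
    """Add qty units of item to the cart dictionary."""
    cart[item] = cart.get(item, 0) + qty
    return cart

test_add_new_item()
test_add_existing_item()
print("all tests pass")
```

The developer stops as soon as the tests pass, which keeps the implementation focused on what the release actually needs.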
4. Testing
Validation tests of the system are run on a daily basis, giving the XP team a regular indication of progress.
XP acceptance tests, also known as customer tests, are specified by the customer.
Scrum
Scrum is an agile software development method.
Scrum principles are consistent with the agile manifesto and are used to guide development activities within a process.
It includes the framework activities of requirements, analysis, design, evolution and delivery.
Within each framework activity, work tasks occur within a process pattern called a 'sprint'.
Scrum emphasizes a set of software process patterns that have proven effective for projects with tight timelines, changing requirements and business criticality.
Each process pattern defines a set of development actions, as follows:
Backlog
A prioritized list of project requirements or features that provide business value for the customer.
Items can be added to the backlog at any time.
The product manager assesses the backlog and updates priorities as required.
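A backlog can be pictured as a priority-ordered list that accepts new items at any time. The sketch below is illustrative only; the field names and re-sorting policy are assumptions, not part of any Scrum standard.

```python
# A minimal sketch of a product backlog as a priority-ordered list.
backlog = [
    {"item": "User login", "priority": 1},
    {"item": "Search catalogue", "priority": 2},
]

def add_item(backlog, item, priority):
    """Items can be added to the backlog at any time;
    the product manager then re-prioritizes the list."""
    backlog.append({"item": item, "priority": priority})
    backlog.sort(key=lambda entry: entry["priority"])  # stable sort keeps order of ties
    return backlog

add_item(backlog, "Payment gateway", 1)
print([entry["item"] for entry in backlog])
```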
Sprints
A sprint consists of the work units required to achieve a requirement defined in the backlog.
Changes are not introduced during a sprint, which allows team members to work in a short-term but stable environment.
Scrum meeting
Short meetings are held daily by the Scrum team.
Key questions are asked and answered by all team members.
The daily meetings lead to 'knowledge socialization' and promote a self-organizing team structure.
Demos
The software increment is delivered to the customer, who evaluates the demonstrated functionality.
Introduction to cleanroom software engineering
Cleanroom software engineering is an approach used to build correctness into software as it is developed.
The main concept behind cleanroom software engineering is to remove the dependency on costly defect-removal processes.
It applies a rigorous quality approach: the code is written correctly from the beginning, and the increments are finally gathered into a complete system.
Following tasks occur in cleanroom engineering:
1. Incremental planning
In this task, the incremental plan is developed.
The functionality of each increment, the projected size of the increment and the cleanroom development schedule are defined.
Care must be taken that each increment is certified and integrated at the proper time according to the plan.
2. Requirements gathering
Requirements are gathered and refined using more traditional techniques.
A more detailed description of the customer-level requirements is then developed.
3. Box structure specification
The specification method uses box structures.
A box structure is used to describe the functional specification.
Box structures separate and isolate the behaviour, data and procedures in each increment.
4. Formal design
The cleanroom design follows naturally from the specification using the box structure approach.
The specification is refined into state boxes, and the component-level designs are called clear boxes.
5. Correctness verification
Cleanroom engineering conducts rigorous correctness verification activities on the design and then on the code.
Verification starts with the highest-level box structure and moves toward the design detail and code.
The first level of correctness verification applies a set of 'correctness questions'.
If these do not demonstrate that the specification is correct, more formal (mathematical) methods of verification are used.
6. Code generation, inspection and verification
The box structure specification, represented in a specialized language, is translated into the appropriate programming language.
Technical reviews are then used to confirm the syntactic correctness of the code.
7. Statistical test planning
The projected usage of the software is analyzed, planned and designed.
This cleanroom activity is carried out in parallel with specification, verification and code generation.
8. Statistical use testing
Exhaustive testing of computer software is impossible, so a limited number of test cases must be designed.
Statistical use testing executes a series of tests derived from a statistical sample of all possible program executions.
The samples are based on the usage of a targeted population of users.
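The idea of sampling tests according to usage can be sketched briefly. The usage profile below, its probabilities and the operation names are invented for illustration; a real cleanroom project would derive the profile from measured customer usage.

```python
import random

# Toy sketch of statistical use testing: test cases are drawn at random
# according to an assumed usage profile (probabilities are illustrative).
usage_profile = {"login": 0.5, "search": 0.3, "checkout": 0.2}

def sample_tests(profile, n, seed=0):
    """Draw n test cases weighted by the probability of each usage."""
    rng = random.Random(seed)          # fixed seed keeps the sample reproducible
    operations = list(profile)
    weights = [profile[op] for op in operations]
    return rng.choices(operations, weights=weights, k=n)

tests = sample_tests(usage_profile, 10)
print(tests)  # frequent usages appear more often in the sample
```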
9. Certification
After verification, inspection and correction of all errors, each increment is certified and ready for integration.
Cleanroom process model
The modeling approach in cleanroom software engineering uses a method called box structure specification.
A 'box' encapsulates the system, or an aspect of the system, at some level of detail.
The information in each box specification is sufficient to define its refinement without depending on the implementation of any other box.
The cleanroom process model uses three types of boxes as follows:
1. Black box
The black box identifies the behaviour of a system.
The system responds to specific stimuli (events) by applying a set of transition rules.
2. State box
The state box consists of state data and operations, similar to an object.
The state box represents the history of the black box, i.e. the data contained in the state box must be maintained across all transitions.
3. Clear box
The transition function used by the state box is defined in the clear box.
That is, a clear box contains the procedural design for the state box.
Introduction to requirement engineering
The process of collecting the software requirements from the client and then understanding, evaluating and documenting them is called requirement engineering.
Requirement engineering constructs a bridge to design and construction.
Requirement engineering consists of seven different tasks as follow:
1. Inception
Inception is a task in which the requirement engineer asks a set of questions to establish the software process.
In this task, the problem is understood and evaluated with a view to the proper solution.
A collaborative working relationship is established between the customer and the developer.
The developer and customer decide the overall scope and nature of the problem.
2. Elicitation
Elicitation means drawing out the requirements from the stakeholders.
Eliciting requirements is difficult because the following problems occur:
Problem of scope: the customer gives unnecessary technical detail rather than clarity about the overall system objectives.
Problem of understanding: poor understanding between the customer and the developer regarding various aspects of the project, such as the capabilities and limitations of the computing environment.
Problem of volatility: the requirements change over time, which makes developing the project difficult.
3. Elaboration
In this task, the information taken from the user during inception and elicitation is expanded and refined.
The main task is developing a refined model of the software's functions, features and constraints.
4. Negotiation
In the negotiation task, the software engineer decides how the project will be achieved with limited business resources.
Rough estimates of development effort are created, and the impact of each requirement on project cost and delivery time is assessed.
5. Specification
In this task, the requirement engineer constructs the final work product.
The work product takes the form of a software requirement specification.
This task formalizes the informational, functional and behavioural requirements of the proposed software.
The requirements are formalized in both graphical and textual formats.
6. Validation
The work product built as an output of requirement engineering is assessed for quality in a validation step.
Formal technical reviews by software engineers, customers and other stakeholders form the primary requirements-validation mechanism.
7. Requirement management
Requirement management is a set of activities that help the project team identify, control and track the requirements and the changes made to them at any time during the ongoing project.
These tasks start with identification: a unique identifier is assigned to each requirement.
Once the requirements are finalized, traceability tables are developed.
Examples of traceability tables include those for the features, sources, dependencies, subsystems and interfaces of the requirements.
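A traceability table can be pictured as rows keyed by a unique requirement identifier. The sketch below is purely illustrative; the identifiers, feature names and column names are invented, not taken from any standard template.

```python
# Illustrative traceability table: each requirement has a unique identifier
# and is traced to its source and the subsystem that realizes it.
traceability = [
    {"id": "REQ-001", "feature": "User login", "source": "Customer meeting", "subsystem": "Auth"},
    {"id": "REQ-002", "feature": "Audit log", "source": "Legal team", "subsystem": "Reporting"},
]

def trace(table, req_id):
    """Look up a requirement by its unique identifier."""
    return next(row for row in table if row["id"] == req_id)

print(trace(traceability, "REQ-002")["subsystem"])  # prints Reporting
```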
Eliciting Requirements
Eliciting requirements helps the team collect the requirements from the user.
The steps of requirement elicitation are as follows:
1. Collaborative requirements gathering
Requirements are gathered by conducting meetings between the developer and the customer.
Rules for preparation and participation are fixed in advance.
The main motive is to identify the problem, propose elements of the solution, negotiate different approaches and specify a preliminary set of solution requirements in an atmosphere that is conducive to achieving the goal.
2. Quality Function Deployment (QFD)
In this technique, the customer's needs are translated into technical requirements for the software.
QFD designs the software according to the demands of the customer.
QFD identifies three types of requirements:
Normal requirements
The objectives and goals stated for the system during meetings with the customer.
These requirements must be present for the customer to be satisfied.
Expected requirements
These requirements are implicit.
They are basic requirements that may not be clearly stated by the customer, but that the customer nevertheless expects.
Exciting requirements
These features go beyond the customer's expectations.
The developer adds some additional or unexpected features into the software to make the customer more satisfied.
For example, a mobile phone is requested with standard features, but the developer adds a few extra functionalities like voice search or a multi-touch screen; the customer is then more excited about the product.
3. Usage scenarios
Until the software team understands how the features and functions will be used by end users, it is difficult to move on to technical activities.
To address this problem, the software team produces a set of structured scenarios that identify a thread of usage for the software.
These scenarios are called 'use cases'.
4. Elicitation work product
The work products created as a result of requirement elicitation depend on the size of the system or product to be built.
The work products consist of a statement of need and feasibility and a statement of scope for the system.
They also include a list of the users who participated in the requirement elicitation.
Analysis Model in Software Engineering
The analysis model operates as a link between the 'system description' and the 'design model'.
In the analysis model, the information, functions and behaviour of the system are defined, and these are translated into architecture, interface and component-level designs during 'design modeling'.
Elements of the analysis model
1. Scenario based element
This type of element represents the system from the user's point of view.
Scenario-based elements are use case diagrams and user stories.
2. Class based elements
The objects of this type of element are manipulated by the system.
Class-based elements define the objects, attributes and relationships, and the collaborations that occur between the classes.
Class-based elements are the class diagram and the collaboration diagram.
3. Behavioral elements
Behavioral elements represent the states of the system and how they are changed by external events.
The behavioral elements are the sequence diagram and the state diagram.
4. Flow oriented elements
As information flows through a computer-based system, it gets transformed.
Flow-oriented elements show how data objects are transformed while they flow between the various system functions.
The flow-oriented elements are the data flow diagram and the control flow diagram.
Analysis Rules of Thumb
The following rules of thumb must be followed while creating the analysis model:
The model should focus on requirements in the business domain. The level of abstraction should be high, i.e. there is no need to give implementation details.
Every element of the model should add to an understanding of the software requirements and focus on the information, function and behaviour of the system.
Consideration of infrastructure and other nonfunctional models should be delayed until design.
For example, a database may be required for a system, but the classes, functions and behaviour of the database are not initially required; if they are considered at the start, the design is only delayed.
Minimum coupling should be maintained throughout the system; the interconnection between modules is known as 'coupling'.
The analysis model should give value to all the people related to the model.
The model should be as simple as possible, because a simple model always helps easy understanding of the requirements.
Concepts of data modeling
Analysis modeling starts with data modeling.
The software engineer defines all the data objects processed within the system and identifies the relationships between the data objects.
Data objects
A data object is a representation of composite information.
Composite information means that the object has a number of different properties or attributes.
For example, 'height' is a single value, so it is not a valid data object; but 'dimensions', containing the height, width and depth, can be defined as an object.
Data Attributes
Each data object has a set of attributes. The attributes of a data object:
name an instance of the data object,
describe the instance, and
make reference to another instance in another table.
Relationship
A relationship shows how data objects are connected to one another.
Cardinality
Cardinality states the number of occurrences of one object that can relate to the number of occurrences of another object.
Cardinality is expressed as:
One to one (1:1)
One occurrence of an object is related to one occurrence of another object.
For example, one employee has only one ID.
One to many (1:N)
One occurrence of an object is related to many occurrences of another object.
For example, one college has many departments.
Many to many (M:N)
Many occurrences of one object are related to many occurrences of another object.
For example, many customers place orders for many products.
Modality
If an occurrence of a relationship is optional, the modality of the relationship is zero.
If an occurrence of a relationship is mandatory, the modality of the relationship is one.
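The three cardinalities can be mirrored in a few small data structures. This is an illustrative sketch only; the class and field names below are invented examples, not part of any data-modeling notation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Employee:          # one-to-one: each employee has exactly one ID
    emp_id: str
    name: str

@dataclass
class College:           # one-to-many: one college has many departments
    name: str
    departments: List[str] = field(default_factory=list)

@dataclass
class Order:             # many-to-many: many customers order many products
    customer: str
    products: List[str] = field(default_factory=list)

c = College("City College", ["Physics", "Chemistry"])
assert len(c.departments) == 2   # one college, two departments
```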
Requirement modeling strategies
Following are the requirement modeling strategies:
1. Flow Oriented Modeling
2. Class-based Modeling
1. Flow Oriented Modeling
It shows how data objects are transformed by processing functions.
The Flow oriented elements are:
i. Data flow model
It is a graphical technique used to represent information flow.
Data objects flow within the software and are transformed by processing elements.
Data objects are represented by labeled arrows; transformations are represented by circles, called bubbles.
The DFD is drawn in a hierarchical fashion, split into different levels; the top level is called the 'context-level diagram'.
ii. Control flow model
A large class of applications requires control flow modeling.
These applications create control information instead of reports or displays, and they process information under time constraints.
An event is implemented as a boolean value, for example true or false, on or off, 1 or 0.
iii. Control Specification
The short form of control specification is CSPEC.
It represents the behaviour of the system.
The state diagram in a CSPEC is a sequential specification of behaviour.
The state diagram includes states, transitions, events and activities.
A state diagram shows the transition from one state to another when a particular event occurs.
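A state diagram of this kind can be encoded as a transition table mapping (state, event) pairs to next states. The states and events below are made-up examples for illustration.

```python
# A small state diagram as a transition table, in the spirit of a CSPEC.
transitions = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
}

def step(state, event):
    """Move to the next state if the event is valid in the current state;
    otherwise stay in the current state."""
    return transitions.get((state, event), state)

state = "idle"
for event in ["start", "pause", "start", "stop"]:
    state = step(state, event)
print(state)  # prints idle: the event sequence returns the system to idle
```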
iv. Process Specification
The short form of process specification is PSPEC.
The process specification is used to describe all flow model processes.
The content of a process specification includes narrative text, a Program Design Language (PDL) description of the process algorithm, mathematical equations, tables or UML activity diagrams.
2. Class-based Modeling
Class-based modeling represents the objects that the system will manipulate and the operations that will be applied to them.
The elements of the class-based model consist of classes and objects, attributes, operations, and class-responsibility-collaborator (CRC) models.
Classes
Classes are determined by underlining each noun or noun phrase and entering it into a simple table.
Classes are found in the following forms:
External entities: the systems, people or devices that generate information used by the computer-based system.
Things: the reports, displays, letters and signals that are part of the information domain of the problem.
Occurrences or events: e.g. a property transfer or the completion of a series of robot movements, occurring within the context of system operation.
Roles: the people, such as a manager, engineer or salesperson, who interact with the system.
Organizational units: the divisions, groups or teams that are relevant to an application.
Places: e.g. the manufacturing floor or loading dock, which establish the context of the problem and the overall function of the system.
Structures: e.g. sensors or computers, which define a class of objects or related classes of objects.
Attributes
Attributes are the set of data objects that fully define a class within the context of the problem.
For example, 'Employee' is a class; the name, ID, department, designation and salary of the employee are its attributes.
Operations
The operations define the behaviour of an object.
The operations are characterized into the following types:
operations that manipulate data, e.g. adding, modifying, deleting and displaying;
operations that perform a computation;
operations that monitor an object for the occurrence of a controlling event.
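The attribute and operation kinds above can be sketched in one small class. The Employee class and its values are invented for illustration; one method of each operation type is shown.

```python
# Illustrative 'Employee' class: attributes define the data; operations
# manipulate it, perform a computation, or monitor for a condition.
class Employee:
    def __init__(self, name, emp_id, department, salary):
        # attributes that define the class within the problem context
        self.name = name
        self.emp_id = emp_id
        self.department = department
        self.salary = salary

    def modify_department(self, new_department):   # manipulates data
        self.department = new_department

    def annual_salary(self):                       # performs a computation
        return self.salary * 12

    def is_overpaid(self, limit):                  # monitors for a condition
        return self.salary > limit

e = Employee("Asha", "E101", "QA", 50000)
e.modify_department("Dev")
print(e.department, e.annual_salary())  # prints Dev 600000
```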
CRC Modeling
CRC stands for Class-Responsibility-Collaborator.
It provides a simple method for identifying and organizing the classes that are applicable to the system or product requirements.
A class is an object-oriented class name; the model also records information about its subclasses and superclasses.
Responsibilities are the attributes and operations that are related to the class.
Collaborators are identified by determining whether the class can achieve each responsibility by itself.
If it cannot, it needs to interact with another class: a collaborator.
Behavioral patterns for requirement modeling
The behavioral model shows the response of the software to external events.
The steps for creating a behavioral model are as follows:
Evaluate all use cases to completely understand the sequence of interaction within the system.
Identify events and understand the relations between specific events.
Create a sequence for each use case.
Build a state diagram for the system.
Review the behavioral model to verify its accuracy and consistency.
Software Requirement Specification (SRS)
The requirements are specified in a specific format known as the SRS.
This document is created before the development work starts.
The software requirement specification is an official document.
It shows in detail what performance is expected of the system.
The SRS indicates to the developer and the customer what will be implemented in the software.
An SRS is especially useful when the software system is developed by an outside contractor.
The SRS must cover interfaces, functional capabilities, quality, reliability, privacy etc.
Characteristics of SRS
The SRS should be complete and consistent.
Modifications, both logical and hierarchical, must be allowed in the SRS.
The requirements should be feasible to implement.
Each requirement should be uniquely identified.
Each statement in the SRS must be unambiguous, i.e. it should have only one interpretation.
All the requirements must be valid for the specified project.
Introduction to design process
The main aim of design engineering is to generate a model that exhibits firmness, commodity and delight.
Software design is an iterative process through which requirements are translated into a blueprint for building the software.
Software quality guidelines
A design should be generated using recognizable architectural styles, should be composed of components that exhibit good design characteristics, and should be implementable in an evolutionary manner to ease testing.
A design of the software must be modular, i.e. the software must be logically partitioned into elements.
In a design, the representations of data, architecture, interface and components should be distinct.
A design must contain appropriate data structures and recognizable data patterns.
Design components must exhibit independent functional characteristics.
A design should create interfaces that reduce the complexity of connections between components.
A design must be derived using a repeatable method.
Notations that effectively communicate the design's meaning should be used.
Quality attributes
The quality attributes, known as 'FURPS', are as follows:
Functionality:
It evaluates the feature set and capabilities of the program.
Usability:
It is assessed by considering factors such as human factors, overall aesthetics, consistency and documentation.
Reliability:
It is evaluated by measuring parameters like the frequency and severity of failure, the accuracy of output results, the mean-time-to-failure (MTTF), the ability to recover from failure and the predictability of the program.
Performance:
It is measured by considering processing speed, response time, resource consumption,
throughput and efficiency.
Supportability:
It combines extensibility, adaptability and serviceability; these three attributes define maintainability.
It also includes testability, compatibility and configurability, i.e. the ease with which a system can be installed and problems can be located.
Supportability further encompasses attributes such as fault tolerance, modularity, reusability, robustness, security, portability and scalability.
Design concepts
The set of fundamental software design concepts are as follows:
1. Abstraction
At the highest level of abstraction, a solution is stated in broad terms using the language of the problem environment.
Lower levels of abstraction provide more detailed descriptions of the solution.
A procedural abstraction refers to a sequence of instructions that has a specific and limited function.
A data abstraction is a named collection of data that describes a data object.
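The two kinds of abstraction can be shown side by side. The door example is a stock illustration; the names Door and open_door are invented for this sketch.

```python
from dataclasses import dataclass

# Data abstraction: a named collection of data describing a data object.
@dataclass
class Door:
    width: float
    height: float
    is_open: bool = False

# Procedural abstraction: 'open_door' names a specific, limited sequence of
# steps; callers use the name without knowing the internal details.
def open_door(door):
    door.is_open = True
    return door

d = open_door(Door(0.9, 2.1))
assert d.is_open
```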
2. Architecture
The complete structure of the software is known as the software architecture.
Structure provides conceptual integrity for a system in a number of ways.
The architecture is the structure of the program modules, the manner in which they interact, and the structure of the data used by the components.
The aim of software design is to obtain an architectural framework of the system, from which the more detailed design activities are conducted.
3. Patterns
A design pattern describes a design structure that solves a particular design problem within a specific context.
4. Modularity
Software is divided into separately named and addressable components, sometimes called modules, which are integrated to satisfy the problem requirements.
Modularity is the single attribute of software that permits a program to be managed easily.
5. Information hiding
Modules must be specified and designed so that information such as algorithms and data contained within a module is inaccessible to other modules that have no need for that information.
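Information hiding can be sketched with a class that keeps its data internal. The Account example and its names are invented; the leading underscore is a Python convention for internal details, not enforced by the language.

```python
# Sketch of information hiding: the balance and its update rules are internal
# to the class; other modules use only the public operations.
class Account:
    def __init__(self, opening_balance):
        self._balance = opening_balance   # underscore marks an internal detail

    def deposit(self, amount):
        # the validation rule is hidden inside the module
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def balance(self):
        return self._balance

a = Account(100)
a.deposit(50)
print(a.balance())  # prints 150; callers never touch _balance directly
```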
6. Functional independence
Functional independence is a concept of separation of concerns and is related to the concepts of modularity, abstraction and information hiding.
Functional independence is assessed using two criteria: cohesion and coupling.
Cohesion
Cohesion is an extension of the information-hiding concept.
A cohesive module performs a single task and requires little interaction with components in other parts of the program.
Coupling
Coupling is an indication of the interconnection between modules in a software structure.
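The two criteria can be illustrated together. The billing classes below are invented examples: each small class does one job (high cohesion), and Billing depends only on their narrow interfaces (low coupling).

```python
# High cohesion: each class performs a single task.
class TaxCalculator:
    def tax(self, amount, rate):
        return amount * rate

class InvoicePrinter:
    def render(self, amount, tax):
        return f"net={amount} tax={tax}"

# Low coupling: Billing depends only on the two narrow interfaces above,
# so either collaborator can be replaced without changing Billing.
class Billing:
    def __init__(self, calculator, printer):
        self.calculator = calculator
        self.printer = printer

    def bill(self, amount, rate):
        return self.printer.render(amount, self.calculator.tax(amount, rate))

print(Billing(TaxCalculator(), InvoicePrinter()).bill(200, 0.5))
```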
7. Refinement
Refinement is a top-down design approach; it is a process of elaboration.
A program is developed by successively refining levels of procedural detail.
A hierarchy is established by decomposing a statement of function in a stepwise fashion until programming language statements are reached.
8. Refactoring
Refactoring is a reorganization technique that simplifies the design of a component without changing its function or behaviour.
It is the process of changing a software system in such a way that the external behaviour of the code is not altered while its internal structure is improved.
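A tiny before-and-after sketch of refactoring, with invented function names and an assumed 18% tax rate: the external behaviour is identical, but the second version has clearer structure.

```python
# Before: an anonymous magic number and a hand-rolled loop.
def total_before(prices):
    total = 0
    for p in prices:
        total = total + p
    total = total + total * 0.18   # what is 0.18? the reader must guess
    return total

# After: intent-revealing names and a named constant; same external behaviour.
TAX_RATE = 0.18

def total_after(prices):
    subtotal = sum(prices)
    return subtotal * (1 + TAX_RATE)

# The refactoring is safe only if both versions agree on all inputs.
assert abs(total_before([100, 200]) - total_after([100, 200])) < 1e-9
```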
9. Design classes
The software model is defined as a set of design classes.
Each class describes an element of the problem domain, focusing on features of the problem that are visible to the user.
OO design concept in Software Engineering
Software design model elements
Object-oriented design is a popular approach for analyzing and designing an application.
Most languages, such as C++, Java and .NET, use object-oriented design concepts.
The concepts used in object-oriented design methods include classes, objects, polymorphism, encapsulation, inheritance, dynamic binding, information hiding, interfaces, constructors and destructors.
The main advantage of object-oriented design is that it improves software development and maintainability.
Another advantage is faster, lower-cost development that creates high-quality software.
The disadvantages of object-oriented design are larger program size and that it is not suitable for all types of program.
Design classes
A set of design classes refines the analysis classes by providing design details.
There are five types of design classes, each representing a layer of the design architecture, as follows:
1. User interface classes
These classes are designed for Human-Computer Interaction (HCI).
The interface classes define all the abstractions required for HCI.
2. Business domain classes
These classes are commonly refinements of the analysis classes.
They are recognized as the attributes and methods required to implement the elements of the business domain.
3. Process classes
These implement the lower-level business abstractions needed to fully manage the business domain classes.
4. Persistence classes
These represent data stores that persist beyond the execution of the software.
5. System classes
System classes implement the software management and control functions that allow the system to operate and communicate within its computing environment and with the outside world.
Design class characteristic
The characteristics of a well-formed design class are as follows:
1. Complete and sufficient
A design class must be a complete encapsulation of all the attributes and methods that need to exist for the class.
2. Primitiveness
Each method of the design class should fulfil exactly one service for the class.
Once a service is implemented with a method, the class should not provide another way to accomplish the same thing.
3. High cohesion
A cohesive design class has a small, focused set of responsibilities.
The design class applies its methods and attributes single-mindedly to implement that set of responsibilities.
4. Low coupling
Within the design model, design classes should collaborate with one another only to the minimum acceptable extent.
If a design model is highly coupled, the system is difficult to implement, test and maintain over time.
Following are the types of design elements:
1. Data design elements
The data design element produces a model of data represented at a high level of abstraction.
This model is then refined into progressively more implementation-specific representations that can be processed by the computer-based system.
The structure of data is the most important part of the software design.
2. Architectural design elements
The architectural design elements give an overall view of the system.
The architectural design element is generally represented as a set of interconnected subsystems derived from the analysis packages in the requirement model.
The architectural model is derived from the following sources:
information about the application domain for the software to be built;
requirement model elements such as data flow diagrams or analysis classes, and the relationships and collaborations between them;
the available architectural styles and patterns.
3. Interface design elements
The interface design elements for software represent the information flows into and out of the system and how they are communicated between the components defined as part of the architecture.
The important elements of interface design are:
1. The user interface
2. The external interfaces to other systems, networks etc.
3. The internal interfaces between various components.
4. Component level diagram elements
The component-level design for software is the equivalent of a set of detailed specifications of each room in a house.
It completely describes the internal details of each software component.
It defines the data structures processed within a component and an interface that gives access to all the component's operations.
In the context of object-oriented software engineering, a component is shown in a UML diagram.
The UML diagram is used to represent the processing logic.
5. Deployment level design elements
The deployment-level design elements show how the software functionality and subsystems are allocated within the physical computing environment that supports the software.
The accompanying figure shows three computing environments: the personal computer, the CPI server and the control panel.
Software Architecture Introduction
The concept of software architecture is similar to the architecture of a building.
The architecture is not the operational software; it is a representation of it.
Software architecture focuses on the role of software components.
In an architectural design, a software component can be as simple as a program module or an object-oriented class.
The architectural design can be extended to include databases and the middleware that enables the configuration of a network of clients and servers.
Importance of software architecture
The following are the reasons software architecture is important:
1. The representation of the software architecture enables communication between all the stakeholders and the developers.
2. The architecture highlights early design decisions that have a profound impact on all the software engineering work that follows and on the ultimate success of the system.
3. The software architecture constitutes a small, intellectually graspable model of the system.
4. This model helps in integrating the components so that they work together.
The architectural style
The architectural style is a transformation and it is applied to the design of an entire
system.
The main aim of architectural style is to build a structure for all components of the
system.
An architecture of the system is redefined by using the architectural style.
An architectural pattern such as architectural style introduces a transformation on the
design of an architecture.
The software is constructed for computer based system and it shows one of the
architectural style from many of style.
Each architectural style describes a system category that includes:
1. A set of components, such as a database or computational modules, that perform the functions
required by the system.
2. A set of connectors that enable the communication, coordination and cooperation between the
components.
3. Constraints that define how the components can be integrated to form the system.
4. A semantic model that allows a designer to understand the overall properties of a system by
analyzing the known properties of its constituent parts.
Architectural design
Architectural design begins by placing the software to be developed into context.
The information is obtained from the requirement model and other information collected
during requirement engineering.
Representing the system in context
All the following entities communicate with the target system through interfaces, shown as
small rectangles in the figure above.
Superordinate systems
These systems use the target system as part of some higher-level processing scheme.
Subordinate systems
These systems are used by the target system and provide the data necessary to complete the
target system's functionality.
Peer-level systems
These systems interact on a peer-to-peer basis, i.e. information is both produced and consumed
by the target system and its peers.
Actors
These are entities, such as people or devices, that interact with the target system by producing
or consuming information necessary for requisite processing.
Defining Archetype
An archetype is a class or pattern that represents a core abstraction critical to the
design or implementation of the target system.
Only a small set of archetypes is needed to design even relatively complex systems.
The target system's architecture is composed of archetypes that represent the stable
elements of the architecture.
An archetype can be instantiated in many different forms based on the behavior of the system.
In many cases, archetypes are obtained by examining the analysis classes defined as a
part of the requirement model.
An Architecture Trade-off Analysis Method (ATAM)
ATAM was developed by the Software Engineering Institute (SEI); it establishes an iterative
evaluation process for software architectures.
The design analysis activities, which are executed iteratively, are as follows:
1. Collect scenarios
Develop a set of use cases that represent the system from the user's point of view.
2. Elicit requirements, constraints and a description of the environment
This information is obtained as part of requirement engineering and is used to verify that
all stakeholder concerns are addressed properly.
3. Describe the architectural patterns
The architectural patterns are described using the following architectural views:
Module view: This view is used to analyze the assignment of work to components and the
degree to which abstraction or information hiding is achieved.
Process view: This view is used to analyze the performance of the software or system.
Data flow view: This view is used to analyze the degree to which the architecture meets the
functional requirements.
4. Evaluate quality attributes in isolation
The quality attributes for architectural design include reliability, performance, security,
maintainability, flexibility, testability, portability, reusability etc.
5. Identify the sensitivity of quality attributes
The sensitivity of quality attributes is determined by making small changes in the
architecture and observing how each quality attribute, e.g. performance, is affected.
Attributes that are significantly affected by variation in the architecture are known as sensitivity points.
Architectural styles for Software Design
The architectural styles that are used while designing the software are as follows:
1. Data-centered architecture
A data store (a file or database) resides at the center of this architecture.
Other components frequently access the data store to update, delete, add or modify the stored data.
Data-centered architecture promotes integrability.
A variation passes data between clients using a blackboard mechanism; the client components
execute their processes independently.
2. Data-flow architecture
This architecture is applied when input data is to be transformed into output data through a
series of computational or manipulative components.
A pipe-and-filter pattern has a set of components called filters; the filters are connected
through pipes that transfer data from one component to the next.
When the flow of data degenerates into a single line of transforms, it is known as batch sequential.
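A minimal pipe-and-filter sketch in Python (the filter names here are illustrative, not from the text): each filter is a function that transforms its input, and the pipe simply feeds one filter's output into the next.

```python
# Pipe-and-filter sketch: each filter transforms its input and passes the
# result down the pipe. Filter names are hypothetical examples.

def strip_whitespace(lines):          # filter 1
    return [ln.strip() for ln in lines]

def drop_empty(lines):                # filter 2
    return [ln for ln in lines if ln]

def to_upper(lines):                  # filter 3
    return [ln.upper() for ln in lines]

def pipeline(data, filters):
    # The "pipe": the output of one filter becomes the input of the next.
    for f in filters:
        data = f(data)
    return data

result = pipeline(["  hello ", "", " world  "],
                  [strip_whitespace, drop_empty, to_upper])
print(result)  # ['HELLO', 'WORLD']
```

Because every filter sees only its input stream, filters can be reordered or replaced without touching the others, which is the point of the style.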
3. Call and return architectures
This architectural style makes it possible to achieve a program structure that is easy to modify.
The following substyles exist in this category:
1. Main program/subprogram architecture
The program is divided hierarchically into smaller pieces.
The main program invokes a number of program components, which in turn are divided into
further subprograms.
2. Remote procedure call architecture
The main program or subprogram components are distributed across a network of multiple computers.
The main aim is to increase performance.
4. Object-oriented architectures
This architecture is the latest version of call-and-return architecture. It consist of the bundling of data and methods.
5. Layered architectures
The different layers are defined in the architecture. It consists of outer and inner layer. The components of outer layer manage the user interface operations. Components execute the operating system interfacing at the inner layer. The inner layers are application layer, utility layer and the core layer. In many cases, It is possible that more than one pattern is suitable and the alternate
architectural style can be designed and evaluated.
Component design introduction
A software component is a modular building block for computer software.
A component is defined as a modular, deployable and replaceable part of a system that
encapsulates its implementation and exposes a set of interfaces.
Components view
A component can be viewed in three different ways:
1. An object-oriented view
From an object-oriented view, a component is a set of collaborating classes.
Each class within a component is fully elaborated; it includes all the attributes
and operations relevant to its implementation.
To achieve an object-oriented design, both the analysis classes and the infrastructure
classes are elaborated.
2. The traditional view
A traditional component is also known as a module.
It resides within the software architecture and serves one of three important roles: a control
component, a problem domain component or an infrastructure component.
A control component coordinates the invocation of all other problem domain
components.
A problem domain component implements a complete or partial function that is needed by the
customer.
An infrastructure component is responsible for functions that support the processing
needed in the problem domain.
3. The Process related view
This view emphasizes building systems out of existing components.
The design patterns are selected from a catalog and used to populate the architecture.
Class-based design components
The principles for class-based design of components are as follows:
Open-Closed Principle (OCP)
A module should be open for extension but closed for modification.
The Liskov Substitution Principle (LSP)
Subclasses must be substitutable for their base classes.
This principle was suggested by Barbara Liskov.
Dependency Inversion Principle (DIP)
Depend on abstractions, not on concretions.
Abstractions are the places where a design can be extended without great difficulty.
The Interface Segregation Principle (ISP)
Many client-specific interfaces are better than one general-purpose interface.
The Release Reuse Equivalency Principle (REP)
The granule of reuse is the granule of release.
Class components designed for reuse form an implicit contract between the developer and the user.
The Common Closure Principle (CCP)
Classes that change together belong together; classes packaged as part of a design should
address the same functional area.
The Common Reuse Principle (CRP)
Classes that are not reused together should not be grouped together.
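The first three principles can be sketched in Python as follows (the class names are illustrative, not from the text). `total_area` depends only on the `Shape` abstraction (DIP), needs no change when a new shape is added (OCP), and any subclass can stand in for the base class (LSP).

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    """Abstraction that the rest of the design depends on (DIP)."""
    @abstractmethod
    def area(self) -> float: ...

class Rectangle(Shape):
    def __init__(self, w, h):
        self.w, self.h = w, h
    def area(self):
        return self.w * self.h

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

def total_area(shapes):
    # Closed for modification: adding a new Shape subclass requires no
    # change here (OCP), and any subclass substitutes for Shape (LSP).
    return sum(s.area() for s in shapes)

print(total_area([Rectangle(2, 3), Circle(1)]))
```

Adding a `Triangle` class later extends the design without editing `total_area`, which is exactly what "open for extension, closed for modification" means.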
User Interface design
User interface design largely determines whether software will be successful.
The user interface is the part of the system through which the user interacts with the computer.
A good interface design is user friendly.
Types of user interface:
1. Command interpreter
Typed commands enable the user to communicate with the computer system.
2. Graphical User Interface (GUI)
A GUI is another approach to communicating with the system.
It provides mouse-based, window- and menu-based systems as an interface.
The Golden Rules
The golden rules are also known as interface design principles.
The golden rules are as follows:
1. Place the user in control
Interaction should be defined in such a way that the user is not forced to perform
unnecessary or undesired actions.
Technical internal details should be hidden from the casual user.
Design for direct interaction with the objects that appear on the screen.
2. Reduce the user's memory load
The user interface must be designed in such a way that it reduces the demands on the
user's short-term memory.
Establish meaningful default values that benefit the average user when the application starts.
There must be a reset option for restoring the default values.
Shortcuts should be easy for users to remember.
The visual layout of the interface should be friendly and familiar to users.
3. Make the interface consistent
The interface must allow the user to put the current task into a meaningful context.
Consistency should be maintained across the whole family of interactions.
Do not change interactive models that have established user expectations unless there is a
compelling reason to do so.
User interface design issues
The user interface design process involves the following four issues:
1. System response time
Length and variability are the two important characteristics of system response time.
2. User help facilities
The user of a software system needs a help facility or user manual for smooth use of
the software.
3. Error information handling
Error messages and warnings that are not meaningful merely irritate users.
Only critical problems should interrupt the user with error messages.
Error message guidelines are as follows:
The message should be described in plain language, i.e. easily understandable to
the user.
The message should provide constructive advice for recovering from the error.
The message may be accompanied by an audible or visual cue, such as a beep, momentary
flashing or a special error color.
The message should indicate any negative consequences of the error so that the user can
check for them.
The wording of the message should never place blame on the user.
4. Command labeling
Commands and menu labels must be consistent, easy to understand and easy to learn.
WebApp Interface design
The principles for designing a WebApp interface are as follows:
1. Anticipation
A WebApp should be designed so that it anticipates the user's next move.
2. Communication
The interface must communicate the status of any activity initiated by the user.
3. Consistency
The use of navigation controls, menus, icons and decorative elements should be consistent
throughout the WebApp.
4. Efficiency
The design of the WebApp and its interface must optimize the user's work efficiency, not the
efficiency of the developer.
5. Flexibility
The interface must be flexible enough to accommodate the needs of the users.
6. Focus
The WebApp interface and the content it presents must stay focused on the user's task.
7. Readability
The presented information must be readable by all users.
8. Learnability
A WebApp interface must be designed to minimize learning time and, once learned, to
minimize the relearning required when the WebApp is revisited.
9. Metaphors
An interface that uses an interaction metaphor is easier to learn and use, as long as the
metaphor is appropriate for the application and the user.
10. Maintain the integrity of the work product
A work product should be saved automatically so that it is not lost if an error occurs.
Introduction to Testing
Testing is a set of activities that are planned in advance, i.e. before development
starts, and conducted systematically.
The software engineering literature defines a number of testing strategies for carrying out
testing.
All of the strategies provide a testing template.
These templates share the following generic characteristics:
To perform testing successfully, the developer should first conduct effective technical
reviews.
Testing begins at the component level and works outward toward the integration of
the entire computer-based system.
Different testing techniques are appropriate at different points in time.
Testing is conducted by the developer of the software and, for larger projects, by an
independent test group.
Testing and debugging are different activities, but debugging must be accommodated
in any testing strategy.
Difference between Verification and Validation
1. Verification is the process of checking whether the software meets the specified
requirements for a particular phase; validation checks whether the software meets the
requirements and expectations of the customer.
2. Verification evaluates intermediate products; validation evaluates the final product.
3. The objective of verification is to check whether the software is constructed according to
the requirement and design specifications; the objective of validation is to check whether the
specifications are correct and satisfy the business need.
4. Verification asks whether the outputs are consistent with the inputs; validation asks
whether the product is accepted by the user.
5. Verification is done before validation; validation is done after verification.
6. Plans, requirements, specifications and code are evaluated during verification; the actual
product or software is tested during validation.
7. Verification is largely a manual check of files and documents; validation is checking the
developed program by executing it.
Strategy of testing
A strategy for software testing may be viewed in the context of a spiral.
The following figure shows the testing strategy:
Unit testing
Unit testing begins at the center of the spiral and concentrates on each unit of the software
as implemented in source code.
Integration testing
Integration testing focuses on the design and construction of the software architecture.
Validation testing
Validation testing checks the requirements, i.e. functional, behavioral and performance
requirements, against the constructed software.
System testing
System testing verifies that all system elements mesh properly and that overall system
function and performance are achieved.
Testing strategy for procedural point of view
From a procedural point of view, testing includes the following steps:
1) Unit testing
2) Integration testing
3) High-order tests
4) Validation testing
These steps are shown in following figure:
Strategy Testing Issues
Following are the issues to be considered when implementing a software testing strategy:
Specify requirements in a quantifiable manner before testing starts.
Understand the categories of users and generate a profile for each category of user.
Build robust software that is designed to test itself.
Use effective Formal Technical Reviews (FTR) as a filter prior to testing.
Conduct FTR to assess the test strategy and the test cases themselves.
Generate test plans from user feedback to improve the quality of testing.
Test strategies for conventional software
Following are the four strategies for conventional software:
1) Unit testing
2) Integration testing
3) Regression testing
4) Smoke testing
1) Unit testing
Unit testing focuses on the smallest unit of software design, i.e. a module or software
component.
The module interface is tested to verify that information properly flows into and out of the unit.
Local data structures are examined to verify that integrity is maintained during execution.
Boundary conditions are tested.
All error-handling paths are tested.
All independent paths are tested.
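The checks listed above can be sketched with Python's `unittest` module (the `divide` function under test is hypothetical): one test exercises the module interface, one a boundary condition, and one an error-handling path.

```python
import unittest

def divide(a, b):
    """Hypothetical unit under test."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

class DivideTest(unittest.TestCase):
    def test_interface(self):
        # Normal flow of input and output through the interface.
        self.assertEqual(divide(10, 2), 5)

    def test_boundary(self):
        # Boundary condition: zero numerator.
        self.assertEqual(divide(0, 5), 0)

    def test_error_path(self):
        # Error-handling path: division by zero must raise.
        with self.assertRaises(ValueError):
            divide(1, 0)

if __name__ == "__main__":
    unittest.main(argv=["unit"], exit=False)
```

Each test method is independent, which mirrors the idea of testing each independent path of the unit separately.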
Following figure shows the unit testing:
Unit test environment
The unit test environment is as shown in following figure:
Difference between stub and driver
1. A stub is a dummy subprogram; a driver is a simple main program.
2. A stub does not accept test case data; a driver accepts test case data.
3. A stub replaces a subordinate module of the program under test; a driver passes test case
data to the component being tested and prints the returned results.
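The two roles can be sketched as follows (all names are hypothetical): the stub stands in for a lower-level module that is not yet implemented, while the driver feeds test case data to the module under test and prints the results.

```python
# Stub: stands in for an unfinished lower-level module by returning a
# canned answer instead of querying a real tax-rate service.
def fetch_tax_rate_stub(region):
    return 0.10

# Module under test: calls the lower-level module through a parameter.
def price_with_tax(price, region, fetch_tax_rate):
    return price * (1 + fetch_tax_rate(region))

# Driver: a simple main program that passes test case data to the module
# under test and prints the returned results.
def driver():
    for price in (100.0, 250.0):
        result = price_with_tax(price, "EU", fetch_tax_rate_stub)
        print(f"price={price} -> {result}")

driver()
```

Once the real rate-fetching module exists, the stub is discarded and the same driver can exercise the integrated pair.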
2) Integration testing
Integration testing is used while constructing the software architecture.
There are two approaches to integration testing:
i) Non-incremental integration testing
ii) Incremental integration testing
i) Non-incremental integration testing
All the components are combined in advance (the "big bang" approach).
When a set of errors occurs, correction is difficult because isolating the causes is
complicated.
ii) Incremental integration testing
The program is built and tested in small increments.
Errors are easier to isolate and correct.
Interfaces are tested completely, and a systematic test approach can be applied.
Following are the incremental integration strategies:
a. Top-down integration
b. Bottom-up integration
a. Top-down integration
It is an incremental approach for building the software architecture.
It starts with the main control module or program.
Modules are merged by moving downward through the control hierarchy.
Following figure shows the top down integration.
Problems with top-down approach of testing
Following are the problems associated with the top-down approach to testing:
In top-down integration testing, adequate test conditions can be difficult to create
because lower-level modules are initially replaced by stubs.
When a set of errors occurs, correction is difficult because the causes are hard to isolate.
Complications grow as testing expands into more and more modules.
When earlier errors are corrected, new ones may be created, and the process can continue
like an infinite loop.
b. Bottom-up integration
In bottom up integration testing the components are combined from the lowest level in the
program structure.
Bottom-up integration is implemented in the following steps:
Low-level components are combined into clusters that perform a specific software sub-
function.
A control program for testing (a driver) coordinates test case input and output.
The clusters are then tested.
The drivers are removed and the clusters are combined by moving upward in the program
structure.
Following figure shows the bottom up integration:
3) Regression testing
In regression testing, previously executed tests are re-run whenever the software
architecture changes, i.e. each time a new module is added during integration testing, to
ensure that the changes have not introduced unintended side effects.
4) Smoke testing
In smoke testing, the software components that have been translated into code are merged
into a build, and the build is exercised to expose errors that would keep it from performing
its function.
Difference between Regression and smoke testing
1. Regression testing checks whether changes to existing programs have introduced defects
into other modules; smoke testing is used while a software product is being developed.
2. In regression testing, components are re-tested to verify that no new errors have been
introduced; smoke testing permits the software team to test the project on a frequent,
regular basis.
3. Regression testing needs extra manpower, so it increases the cost of the project; smoke
testing does not need extra manpower and does not significantly affect the cost of the
project.
4. Testers conduct regression testing; developers conduct smoke testing, typically on builds
just before releasing the product.
Alpha and Beta testing
1. Alpha testing is performed at the developer's site by the customer; beta testing is
performed at end-user sites, in the absence of the developer.
2. Alpha testing is applied to software projects and applications; beta testing usually
applies to software products.
3. Alpha testing is not open to the market or the public; beta testing is always open to the
market and the public.
4. Alpha testing has no other common name; beta testing is also known as field testing.
5. In alpha testing, the environment is controlled by the developer; in beta testing, the
environment is not controlled, and the developer corrects errors as users report problems.
6. In alpha testing, the developer modifies the code before release, without broad user
feedback; in beta testing, the developer modifies the code after getting feedback from users.
System testing
System testing tests the behavior of the complete system or software against the
software requirement specification.
It is actually a series of different tests.
It allows testing, verifying and validating both the business requirements and the application
architecture.
The primary purpose of these tests is to fully exercise the computer-based system.
Following are the system tests for computer-based systems:
1. Recovery testing
To check the recovery of the software, the software is forced to fail in various ways.
Reinitialization, checkpointing mechanisms, data recovery and restart are evaluated for
correctness.
2. Security testing
Security testing verifies that the system's protection mechanisms protect it from improper
penetration.
3. Stress testing
The system is executed in a way that demands resources in abnormal quantity, frequency or volume.
A variation of stress testing is known as sensitivity testing.
4. Performance testing
Performance testing is designed to test run-time performance of the system in the context
of an integrated system.
It is often coupled with stress testing and usually requires both hardware and software
instrumentation.
5. Deployment testing
It is also known as configuration testing.
It verifies that the software works in each environment in which it is to operate.
Debugging process
Debugging is not testing, but it occurs as a consequence of testing.
The process starts with the execution of a test case.
The debugging process always has one of two outcomes: either the cause is found and
corrected, or the cause is not found.
Debugging Strategies
The objective of debugging is to find and correct the cause of a software error.
Following are the debugging strategies:
1. Brute force
Brute force is the most commonly used and least efficient method for isolating the cause of
a software error.
This method is applied when all else fails.
2. Backtracking
Backtracking is used successfully in small programs.
Starting from the site where a symptom is uncovered, the source code is traced backward
manually until the cause is found.
3. Cause elimination
Cause elimination is based on the concept of binary partitioning.
It uses induction or deduction.
The data related to the error occurrence is organized to isolate potential causes.
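Binary partitioning can be sketched as follows (the `changes` list and the `still_fails` predicate are hypothetical): the suspect range is halved repeatedly until the single change that first introduces the error remains, in the spirit of cause elimination.

```python
def first_bad(changes, still_fails):
    """Binary-partition a suspect list: return the earliest change for
    which still_fails(prefix ending at that change) is True."""
    lo, hi = 0, len(changes) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if still_fails(changes[:mid + 1]):
            hi = mid          # error already present in the first half
        else:
            lo = mid + 1      # cause must lie in the second half
    return changes[lo]

# Hypothetical scenario: change "C" introduced the bug, so any prefix
# that includes "C" still fails.
changes = ["A", "B", "C", "D", "E"]
still_fails = lambda applied: "C" in applied
print(first_bad(changes, still_fails))  # C
```

Each halving eliminates half of the remaining potential causes, so the culprit is isolated in a logarithmic number of test runs rather than a linear one.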
Characteristics of testability
Following are the characteristics of testability:
1. Operability
The better the software works (the higher its quality), the more efficiently it can be tested.
2. Observability
It is the ability to see what data is being tested.
With good observability, incorrect output is easy to identify.
Internal errors should be automatically caught and reported.
3. Controllability
The better the software can be controlled, the more testing can be automated and
optimized.
4. Decomposability
If the software system is built from independent modules, they can be tested independently.
5. Simplicity
Programs should exhibit functional, structural and code simplicity so that they are
easier to test.
6. Stability
Changes are infrequent during testing and do not invalidate existing tests.
7. Understandability
The architectural design is well understood.
The technical documentation is quickly accessible, well organized and accurate.
Attributes of 'good' test
A good test has a high probability of finding an error.
A good test is not redundant: testing time and resources are limited, so there is no purpose
in running a test that serves the same purpose as another test.
A good test should have the highest likelihood of uncovering a whole class of
errors.
A good test should be neither too simple nor too complex, and tests should generally be
executed separately rather than combined.
Difference between white and black box testing
1. White-box testing is also known as glass-box testing; black-box testing is also called
behavioral testing.
2. White-box testing starts early in the testing process; black-box testing is applied in the
later stages of testing.
3. White-box testing requires knowledge of the implementation; black-box testing does not.
4. White-box testing is mainly done by the developer; black-box testing is done by testers.
5. In white-box testing the tester must be technically sound; in black-box testing the tester
may or may not be technically sound.
6. White-box testing methods include basis path testing and control structure testing;
black-box testing methods include graph-based testing, equivalence partitioning, boundary
value analysis and orthogonal array testing.
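Two of the black-box techniques named above, equivalence partitioning and boundary value analysis, can be sketched as follows (the `valid_age` function and its 18..65 range are hypothetical): one representative value is chosen from each input partition, plus values at and just beyond each boundary.

```python
def valid_age(age):
    """Hypothetical unit: accepts ages in the inclusive range 18..65."""
    return 18 <= age <= 65

# Equivalence partitioning: one representative per partition.
assert valid_age(10) is False   # "below range" partition
assert valid_age(40) is True    # "valid" partition
assert valid_age(80) is False   # "above range" partition

# Boundary value analysis: test at and just beyond each boundary.
assert [valid_age(a) for a in (17, 18, 19)] == [False, True, True]
assert [valid_age(a) for a in (64, 65, 66)] == [True, True, False]
print("all partitions and boundaries behave as specified")
```

The point of both techniques is that a handful of well-chosen inputs stands in for the whole input domain without requiring knowledge of the implementation.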
Basis Path Testing
Basis path testing was proposed by Tom McCabe.
It is a white-box testing technique.
It enables the test case designer to derive a logical complexity measure of a procedural
design.
This measure is used as a guide for defining a basis set of execution paths.
The main concepts in basis path testing are as follows:
Flow graph notation
Independent program paths
Deriving test cases
Graph matrices
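The logical complexity measure used here is cyclomatic complexity, V(G) = E − N + 2 for a connected flow graph with E edges and N nodes (equivalently, the number of predicate nodes plus 1). A sketch for a small hypothetical flow graph:

```python
def cyclomatic_complexity(edges):
    """V(G) = E - N + 2 for a connected flow graph given as (from, to) pairs."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# Hypothetical flow graph of a while-loop containing an if-else:
# node 1 entry, 2 loop test, 3 if test, 4 then-branch, 5 else-branch,
# 6 join (back to the loop test), 7 exit.
edges = [(1, 2), (2, 3), (2, 7), (3, 4), (3, 5), (4, 6), (5, 6), (6, 2)]

v = cyclomatic_complexity(edges)
print(v)  # 3 -> the basis set contains 3 independent paths, e.g.
          # 1-2-7, 1-2-3-4-6-2-7 and 1-2-3-5-6-2-7
```

With 2 predicate nodes (the loop test and the if test), the predicate count plus one also gives 3, so V(G) tells the designer how many test cases are needed to execute every statement at least once.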
Control structure testing
Control structure testing broadens testing coverage and improves the quality of white-box
testing.
The control structure testing techniques are as follows:
1) Condition testing
Condition testing exercises the logical conditions contained in a program module.
2) Data flow testing
A test path of a program is selected as per the location of definitions and the uses of variables in
the program.
3) Loop testing
Loop testing is a white-box testing technique that focuses on the validity of loop
constructs.
Four different classes of loops are:
1. Simple loops
2. Concatenated loops
3. Nested loops
4. Unstructured loops
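For simple loops with a maximum of n passes, the classic test set exercises 0, 1, 2, a typical number m, and n−1, n and n+1 passes. A sketch against a hypothetical loop-based function:

```python
def sum_first(values, limit):
    """Hypothetical unit under test: a simple loop that sums at most
    `limit` values from the list."""
    total = 0
    for i, v in enumerate(values):
        if i >= limit:
            break
        total += v
    return total

data = [1, 2, 3, 4, 5]
n = len(data)  # maximum possible number of loop passes

# Simple-loop test set: 0, 1, 2, a typical m, and n-1, n, n+1 passes.
for passes in (0, 1, 2, 3, n - 1, n, n + 1):
    expected = sum(data[:passes])          # independent oracle
    assert sum_first(data, passes) == expected
print("all simple-loop cases passed")
```

The n+1 case deliberately asks for one pass more than the loop can make, which is exactly the kind of off-by-one boundary this technique is meant to expose.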
The management spectrum
Software project management focuses on the four P's, which are as follows:
1. People
This P deals with recruiting and motivating highly skilled people.
It encompasses the stakeholders, the team leaders and the software team.
2. Product
The product objectives and the scope should be established before the project planning.
3. Process
The process provides the framework from which the software development plan is created.
Umbrella activities such as software quality assurance, software configuration
management and measurement overlay the process model.
4. Project
Planned and controlled software projects are managed for one reason: it is the only known
way of managing complexity.
To avoid project failure, the developer should heed a set of common warning signs and
develop a common-sense approach for planning, monitoring and controlling the project.
Problem Decomposition
Problem decomposition is also known as partitioning or problem elaboration.
It is an activity that takes place during software requirement analysis.
The problem is rarely decomposed completely when the software scope is being defined.
Decomposition is applied in two important areas:
First, the functionality and content that must be delivered.
Second, the process that will be used to deliver it.
Process Decomposition
A software team must have significant flexibility in choosing the software process model
that is best for the project, and in populating the model once it is chosen.
The following work tasks are needed for the communication activity in simple and small projects:
Establish a list of clarification issues.
To address these issues meet the stakeholders.
Develop a statement of scope jointly; review it with all concerned and modify it
as needed.
The following work tasks are required for the communication activity in a complex project:
Review the user or customer request.
Schedule and plan a formal and facilitated meeting with all customers.
Conduct research to specify the proposed solution and existing approaches.
For the formal meetings, prepare a schedule and working document.
Develop use cases that describe the software from the user's point of view.
Review the use cases for consistency, correctness and lack of ambiguity.
Assemble the use cases into a scoping document and review it with all concerned.
Modify the use cases as needed.
Process and Project Metrics
1. Process Metrics
Process metrics are collected across all projects and over long periods of time.
They allow a project manager to:
Assess the status of an ongoing project.
Track potential risks.
Uncover problem areas before they become critical.
Adjust the workflow or tasks.
Evaluate the project team's ability to control the quality of software work products.
2. Project Metrics
On most software projects, the first application of project metrics occurs during
estimation.
Metrics collected from previous projects act as a baseline from which effort and time
estimates are made for the current software work.
As the project proceeds, the actual time and effort expended are compared to the original
estimates.
As quality improves, defects are minimized, and as the defect count goes down, the amount
of rework required during the project is also reduced.
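The comparison of actuals against baseline estimates described above can be sketched as follows (all figures are hypothetical):

```python
# Hypothetical baseline estimates (from past projects) vs. actuals
# measured on the current project.
estimate = {"effort_pm": 12.0, "duration_months": 6.0}   # person-months, months
actual   = {"effort_pm": 14.4, "duration_months": 6.9}

def variance_pct(est, act):
    """Percentage deviation of an actual value from its estimate."""
    return 100.0 * (act - est) / est

for key in estimate:
    # A large positive variance signals that the plan needs adjustment.
    print(f"{key}: {variance_pct(estimate[key], actual[key]):+.1f}%")
```

Tracking these simple variances as the project goes on is what lets a manager detect schedule slippage early instead of at delivery.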
Risk Management in Software Engineering
Risk always involves two characteristics: uncertainty and loss.
Following are the categories of the risk:
1. Project risk
If a project risk becomes real, it is likely that the project schedule will slip and that the
cost of the project will increase.
Project risks identify potential schedule, resource, stakeholder and requirements
problems and their impact on a software project.
2. Technical risk
If a technical risk becomes real, implementation may become difficult or impossible.
Technical risks identify potential design, interface, verification and maintenance problems.
3. Business risk
If a business risk becomes real, it harms the project or the product.
There are five sub-categories of business risk:
1. Market risk - building an excellent product or system that no one really wants.
2. Strategic risk - building a product that no longer fits into the overall business strategy of
the company.
3. Sales risk - building a product that the sales force does not understand how to sell.
4. Management risk - losing the support of senior management due to a change in focus.
5. Budget risk - losing budgetary or personnel commitment.
Other risk categories
The following categories were suggested by Charette:
1. Known risks: These risks are uncovered after careful evaluation of the project plan.
2. Predictable risks: These risks are extrapolated from previous project experience.
3. Unpredictable risks: These risks can and do occur, but are extremely difficult to identify in
advance.
Principles of risk management
Maintain a global perspective - View software risks in the context of the system and the
business problem it is planned to solve.
Take a forward looking view – Think about the risk which may occur in the future and create
future plans for managing the future events.
Encourage open communication – Encourage all the stakeholders and users for suggesting
risks at any time.
Integrate – A consideration of risk should be integrated into the software process.
Emphasize a continuous process - Modify identified risks as more information becomes
known, and add new risks as better insight is achieved.
Develop a shared product vision – If all the stakeholders share the same vision of the software
then it is easier for better risk identification.
Encourage teamwork – While conducting risk management activities pool the skills and
experience of all stakeholders.
Risk Identification
Risk identification is a systematic attempt to specify threats to the project plan.
Two different types of risk:
1. Generic risks
These risks are a potential threat to each software project.
2. Product-specific risks
These risks are recognized by those with a clear understanding of the technology, the
people and the environment which is specific to the software that is to be built.
One method for identifying risks is to create a risk item checklist.
The checklist is used for risk identification and focuses on a subset of known and predictable
risks in the following categories:
1. Product size
2. Business impact
3. Customer characteristic
4. Process definition
5. Development environment
6. Technology to be built
7. Staff size and experience
Risk Mitigation, Monitoring and Management (RMMM)
Risk analysis supports the project team in constructing a strategy for dealing with risks.
An effective strategy must consider three important issues:
Risk avoidance or mitigation - the primary strategy, fulfilled through a plan for avoiding
the risk.
Risk monitoring - the project manager monitors factors that may give an indication of
whether a risk is becoming more or less likely.
Risk management and contingency planning - assumes that mitigation efforts have failed
and that the risk has become a reality.
RMMM Plan
The RMMM plan may be a part of the software development plan or a separate document.
It documents all work performed as a part of risk analysis and is used by the
project manager as a part of the overall project plan.
Risk mitigation and monitoring begin after the project starts and the
RMMM documentation is complete.
Software Quality
Software quality is the result of an effective software process, applied in a way that creates a
useful product providing measurable value for those who produce and use it.
Software quality factors
Following are the software quality factors:
1. McCall's Quality factors
2. ISO 9126 Quality factors
McCall's Quality Factors
McCall's software quality factors focus on three aspects of a software product.
1. Operational characteristics of the software product
Correctness – The extent to which a program satisfies its specification and fulfills the
customer's requirements.
Reliability – The extent to which a program can be expected to perform its intended
function with the required precision.
Efficiency – The amount of computing resources and code required by a program to
perform its functions.
Integrity – The extent to which access to the software and its data by unauthorized
persons is controlled.
Usability – The effort needed to learn, operate, prepare input for and interpret the output
of a program.
2. Adaptability to new environments or product transition
Portability – The effort needed to transfer the program from one system (hardware or
software) environment to another.
Reusability – The extent to which a program, or a part of a program, can be reused in
other applications.
Interoperability – The effort needed to couple one system to another.
3. Ability to undergo change or product revision
Maintainability – The effort needed to locate and fix an error in the program.
Flexibility – The effort needed to modify an operational program.
Testability – The effort needed to test a program to ensure that it performs its intended
function.
ISO 9126 Quality factors
The ISO 9126 standard was developed in an attempt to identify the key quality attributes of software.
The standard identifies the following quality attributes:
I. Functionality
II. Reliability
III. Usability
IV. Efficiency
V. Maintainability
VI. Portability
Software Reliability
Software reliability is the probability of failure-free operation of a program in a specified
environment for a specified time.
If the software does not produce the desired output according to the requirements, the
result is a failure.
Measures of software reliability and availability:
A measure of reliability is Mean Time Between Failure (MTBF).
MTBF= MTTF + MTTR
Where, the MTTF and MTTR are Mean Time To Failure and Mean Time To Repair.
An alternative measure of reliability is Failures In Time (FIT).
Software availability is the probability that a program is operating according to the
requirements at a given point in time.
Availability =[MTTF/ (MTTF + MTTR)] * 100%
The availability measure is an indirect measure of the maintainability of the software and
it is more sensitive to MTTR.
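The relationships above can be checked numerically. In this sketch the MTTF and MTTR figures are arbitrary illustrative values, not data from any real system:

```python
# Illustrative check of the reliability/availability formulas above.
# The MTTF and MTTR figures are arbitrary example values (in hours).
mttf = 980.0   # Mean Time To Failure
mttr = 20.0    # Mean Time To Repair

mtbf = mttf + mttr                         # MTBF = MTTF + MTTR
availability = mttf / (mttf + mttr) * 100  # as a percentage

print(f"MTBF = {mtbf} hours")                  # MTBF = 1000.0 hours
print(f"Availability = {availability:.1f}%")   # Availability = 98.0%

# Availability is more sensitive to MTTR: halving the repair time
# raises availability noticeably, even though MTTF is unchanged.
faster_repair = mttf / (mttf + mttr / 2) * 100
print(f"Availability with MTTR halved = {faster_repair:.2f}%")
```

Note how improving MTTR from 20 to 10 hours lifts availability from 98.0% to about 98.99% without any change in failure behaviour, which is why availability is described as an indirect measure of maintainability.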
Distributed Software Engineering
In a distributed system, various computers are connected in a network.
The connected computers communicate with each other by passing messages over the
network.
Distributed system issues
A distributed system is more complex than a system running on a single
processor.
The complexity arises because various parts of the system are managed separately
across the network.
Following are design issues considered while designing distributed systems:
Resource sharing - Sharing hardware and software resources.
Openness - The system is designed in such a way that equipment and software from
different vendors can be used.
Concurrency - Different users concurrently access the resources from different
locations.
Scalability - The distributed system should be scalable to accommodate an
increased service load.
Fault tolerance - The system should continue to function properly after a fault has
occurred.
Aspect-Oriented Software Engineering(AOSE)
The relationships between various program components and their requirements are
complex.
A single component may serve several different requirements. To achieve this, AOSE is
used.
AOSE is used to make programming easier and to maintain reusability.
The main benefit of AOSE is that it supports the separation of concerns into independent elements.
Separation of concerns
Separation of concerns is a basic principle of software design and implementation.
Concerns are organized in the software so that each element performs only one job.
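The idea of separating a cross-cutting concern from core logic can be sketched informally with a Python decorator. This is only an analogue of an AOSE "aspect", not a full aspect weaver, and the function names are invented for illustration:

```python
# A sketch of separating a cross-cutting concern (logging) from core
# business logic using a decorator -- an informal analogue of an
# AOSE "aspect". Names here are illustrative.
import functools

def logged(func):
    """Aspect-like wrapper: the logging concern lives here,
    not inside the business logic it is woven around."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__} with {args}")
        result = func(*args, **kwargs)
        print(f"{func.__name__} returned {result}")
        return result
    return wrapper

@logged
def transfer(amount: float, fee: float = 0.0) -> float:
    # Core concern: compute the transferred amount after fees.
    return amount - fee

transfer(100.0, fee=2.5)  # logging happens without touching transfer()
```

The `transfer` function performs only one job; the logging concern is an independent element that can be attached to, or removed from, any function without editing its body.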
SOA (Service-Oriented Architecture)
SOA refers to an approach for building reliable distributed systems.
SOA is a collection of services that communicate with each other.
The communication may involve simple data passing, or two or more services
coordinating some activity.
In SOA, there is loose coupling between the interacting services.
SOA builds distributed systems in which the components are stand-alone services.
These services can be executed on different computers from different service providers.
The standard protocols are developed for supporting service communication and
information exchange.
Architecture and services are the two important terms with respect to the SOA.
The architecture includes a description of the overall system, such as its purpose, functional
properties and various interfaces.
Business services, entity services, functional services and utility services are the types of
services used in an SOA system.
The services are strongly reusable business functions, and they communicate via a
message-passing system.
The basic service-oriented architecture includes a service provider and a service user.
The user requests a service by sending a request message, and the service provider sends a
reply message to the service consumer.
The way of defining requests and their corresponding responses must be understandable to
both the service provider and the service user. This is achieved using standards such as
SOAP and WSDL, which are explained in the following point, SOA and web services.
Advantages of SOA
Services can be provided locally or outsourced to external providers.
The services are language independent.
SOA and Web services
Web services are a realization of SOA.
SOA is an architectural model which is independent of any technology platform, and web
services are the most popular approach to implementing SOA.
The web services provide services over the web.
By building web services on HTTP, all computers that can connect to the internet are able
to use them. This protocol carries the web service communication.
After deciding on the protocol for communication, the language for the messages must be
decided. XML is chosen for that purpose because it is platform independent and
various systems can easily understand it.
Using XML, information is organized in the required way.
XML is widely used to give structure to data and present it in a formatted way.
In web services, the Web Service Description Language (WSDL) plays a role similar to a
method signature. A WSDL document is written in XML so that users can better understand
the web service.
WSDL describes the methods or functions of the service, their parameters and the data types.
Web services are made available to users through an application-oriented interface such as
WSDL.
In web services, method invocation is accomplished through the Simple Object Access
Protocol (SOAP).
A SOAP message is written in XML and sent to the web service over HTTP to consume the
service.
The user of the web service can construct the correct SOAP message based simply on the
WSDL document. The reply message is also in SOAP format.
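A minimal sketch of constructing such a SOAP request in Python follows. The SOAP 1.1 envelope namespace is real, but the service namespace, the `GetTemperature` operation and its `City` parameter are hypothetical examples standing in for what a real WSDL document would define:

```python
# A minimal SOAP 1.1 envelope built with the Python standard library.
# The service namespace and the GetTemperature operation are
# hypothetical; in practice they come from the service's WSDL.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"  # real SOAP 1.1 namespace
SERVICE_NS = "http://example.com/weather"              # assumed service namespace

ET.register_namespace("soap", SOAP_NS)

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")

# The operation name and its parameters would be taken from the WSDL.
operation = ET.SubElement(body, f"{{{SERVICE_NS}}}GetTemperature")
city = ET.SubElement(operation, f"{{{SERVICE_NS}}}City")
city.text = "Pune"

message = ET.tostring(envelope, encoding="unicode")
print(message)

# The message would then be POSTed to the service endpoint over HTTP
# (e.g. with urllib.request), and the reply parsed as a SOAP envelope.
```

This illustrates the division of labour described above: WSDL tells the consumer what element names and types to use, and SOAP defines the envelope that carries them over HTTP.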