Achieve Quality Excellency by Engineering Test Strategy...
Session Topic: Achieve Quality Excellency by Engineering Test Strategy for Custom Development Projects
Conference Name: 13th Annual International Software Testing Conference (STC 2013)
Author Name: Mohan Kumar B S
Mail Id: [email protected]
Company: SAP Labs India Private Ltd, #138, EPIP, Whitefield, Bangalore - 560066
Abstract
In today’s competitive world, it is highly challenging to achieve customer satisfaction. The main corporate mantra for achieving customer satisfaction is to reduce TCO while delivering high quality, and the way to achieve high quality is through quality testing. Custom development projects typically follow different SDLC processes based on certain criteria. It is therefore essential to engineer, or tailor, an effective test strategy and test plan in order to achieve product quality and, in turn, the highest customer satisfaction.
This session mainly focuses on best practices of test strategy and planning such as:
Flavors of custom development projects and quality requirements
Developing test strategy and planning based on the SDLC process
Early testing and review techniques
Adopting value stream mapping
Effort estimation, resource planning and risk management in testing
What could be the best automation approach and when to plan
Introduction to effective intuition based testing like exploratory testing and how to plan/design
Defect consignment techniques
Effective reporting and escalation process
Test case designing with appropriate techniques
How to achieve quality readiness before customer acceptance
How to adopt scrum at testing
Case study on how SAP Custom Development has adopted test strategy and planning processes for customer
projects.
Many enterprises run globally. An enterprise application is a business application, usually a large one, that enterprises need in order to run their business in a scalable manner. In today’s corporate environment, enterprise applications are complex, scalable, distributed, component-based, and mission-critical. They are data-centric and user-friendly, and must meet stringent requirements for security, administration, and maintenance. In short, they are highly complex systems.
Designing and developing such enterprise applications means satisfying hundreds or thousands of separate
requirements. What’s more, every development decision you make to satisfy each requirement affects many other
requirements, often in ways that are difficult to understand or predict — and the failure to meet any of these
requirements can mean the failure of the entire project!
Achieving the highest customer satisfaction is an important aspect of every application development.
Many enterprise applications are built on standard requirements or according to industry-specific standards. Implementing such enterprise applications means that any number of new requirements can come up to satisfy specific enterprise business needs. The volume of custom development projects can vary from small to XXL.
A typical custom solution life cycle is as follows.
Some of the Software Development Life Cycles followed are:
Iterative Waterfall
Lean Development
Agile Development
Engineering an effective test strategy is therefore very much essential.
A test strategy is an outline that describes the testing approach of the Software development life cycle. It is created
to inform project managers, testers, and developers about some key issues of the testing process. This includes the
testing objective, methods of testing new functions, total time and resources required for the project, and the testing
environment.
Test strategies describe how the product risks of the stakeholders are mitigated at the test-level, which types of test
are to be performed, and which entry and exit criteria apply. They are created based on development design
documents. System design documents are primarily used and occasionally, conceptual design documents may be
referred to. Design documents describe the functionality of the software to be enabled in the upcoming release. For
every stage of development design, a corresponding test strategy should be created to test the new feature sets.
Defining Test Objective, Test Scope and Project Methodology:
Define the purpose for designing and executing a test. The test scope is based on the specification and includes new scope, out of scope, and existing functionalities from the standard software. It is also appropriate to explain the project methodology followed, as this in turn decides the test planning.
Define Test Levels/Types:
The test strategy describes the test levels to be performed.
Test 1: Static Test
Why Static Test?: Static testing is performed to prevent bugs early in the development cycle.
Implementation approach:
Test Owner:
Tools/Technique:
Test 2: UT (Unit Testing)
Why UT?: Unit testing is performed to determine that individual program modules perform as per the design specifications.
Goal: The purpose of the Unit Test is to check and verify that development objects behave as per the design and requirement specifications.
Implementation approach:
Test Owner:
Test System:
Tools/Technique:
Link to Unit Test Cases:
Link to Unit test results:
Test 3: MIT (Module Integration Testing)
Why MIT?: This is performed to determine how well the program modules work together.
Goal: To test the cross-functional integration points as well as end-to-end business processes.
Scope: The scope will cover all development as per the SOW.
Implementation approach:
Test Owner:
Test System setup /Test data:
Test Resources:
Tools/Technique:
Link to MIT Test Cases:
Message Handling:
Test 4: ST (System Testing)
Why ST?: This is performed to determine how well the application works in the integrated system.
Goal: Testing an integrated system to verify that it meets specified requirements.
Scope: The scope will cover all development as per the SOW.
Implementation approach:
Test Owner:
Test System setup /Test data:
Test Resources:
Tools/Technique:
Link to ST Test Cases:
Message Handling:
Test 5: AT (Customer Acceptance Test)
Goal: To gain acceptance of all functionality from the customer. The Customer Acceptance Test shall verify that the system meets customer requirements as specified.
Scope: < as discussed with customer and based on the specification document>
Test Owner: <point of contact provided by customer>
Message Handling:
Test Design Techniques:
The objective of using test design techniques is as follows:
Understand and apply test techniques during design or test execution
Learn to choose appropriate testing techniques in specific circumstances
Using these techniques will lead to better-tested software and has a positive impact on the efficiency and effectiveness of testing, resulting in higher quality of our projects/products.
The choice of which test techniques to use depends on a number of factors:
Customer requirements
Level/type of risk
Knowledge of the testers
Time and budget
Previous experience of the types of defects found
Development life cycle and use case models
The main test design techniques, with their key idea, method and applicability, are:

All-Pairs
Key Idea: Test combinatorics involving a large set of parameters.
Method: Systematic "pair-coverage" of a large parameter space; finds 70-90% of defects while reducing the number of possible test combinations.
Applicability: Parameters semi-coupled; large number of parameters and values.

Boundary Value Analysis
Key Idea: Optimized test cases/test coverage: test boundary value conditions in addition.
Method: Find boundary values, which generally have a high likelihood of defect occurrence.
Applicability: Any range/equivalence class.

Decision Table
Key Idea: Specify an exhaustive test by table.
Method: Full specification of the input-to-output mapping; exhaustive test of all combinations.
Applicability: All parameters are independent; small number of parameters and values.

Equivalence Class Partitioning
Key Idea: Optimized test cases/test coverage: a single value represents the whole set.
Method: Find values that are "treated the same" and test only these, without loss of fidelity in the test.
Applicability: Parameters with a large value/set range.
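As an illustration, Boundary Value Analysis and Equivalence Class Partitioning from the techniques above can be sketched for a hypothetical numeric field that accepts values 1..100. The range, function names and partition labels are illustrative assumptions, not part of any specific project:

```python
# Sketch: deriving boundary-value and equivalence-class test inputs
# for a hypothetical numeric field that accepts values 1..100.

def boundary_values(lo, hi):
    """Classic boundary value analysis: the boundaries, their
    immediate neighbours, and a nominal mid-range value."""
    return [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]

def equivalence_classes(lo, hi):
    """One representative per partition: below range (invalid),
    inside range (valid), above range (invalid)."""
    return {"invalid_low": lo - 10,
            "valid": (lo + hi) // 2,
            "invalid_high": hi + 10}

print(boundary_values(1, 100))       # [0, 1, 2, 50, 99, 100, 101]
print(equivalence_classes(1, 100))
```

Instead of testing all 100 valid inputs, the tester executes only these few representative values without loss of defect-finding fidelity.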
Intuitive Test Approach:
Plan appropriate intuition-based test approaches depending on the situation, for example exploratory testing. These kinds of test approaches can identify hidden issues that could not be identified by standard test approaches and standard test cases.
Roles & Responsibilities:
The roles and responsibilities of the test leader, test manager, individual testers, project manager, and quality manager are to be clearly defined. These may not have names associated with them, but each role has to be very clearly defined.
Testing strategies should be reviewed by the developers, and by the test leads for all levels of testing, to make sure the coverage is complete yet not overlapping. Both the testing manager and the development managers should approve the test strategy before testing can begin.
Role and responsibility with regard to test activities:

Project Lead (SAP):
Primary contact between the Customer and the project development team
Primary contact for all change control requests (if any)

Test Coordinator (SAP):
Primary contact for all testing activities
Responsible for coordinating the test activities, resources and reports at SAP
Responsible for implementing the Defect Management Process together with the Customer

Architect / Technical Lead:
Responsible for the development activities within the project
Primary contact for test cases/scenarios
Authorized to set and change the defect fix priorities with the customer’s agreement during ‘defect review meetings’
Responsible for proving each transport to ensure that it is stable enough for testing activities to proceed
Accountable for message dispatching

Tester:
Executes the test cases and provides test feedback in accordance with the agreed tools
Submits detailed information regarding issues and defects
Verifies defects that have been fixed

Developer:
Responsible for reviewing defects and for giving corresponding status feedback
Required to resolve defects in accordance with the project coding standards

System Administrator:
Responsible for availability of the development and test systems

Project Lead (Customer):
Primary contact between the test group and SAP
Primary contact for all change control requests (if any)

Test Coordinator (Customer):
Primary contact for all testing activities
Responsible for coordinating the test activities, resources and reports at the Customer

Subject Matter Experts:
Create test cases that validate the functions listed in the Specification, based on the Customer’s business process scenarios
Perform data customization tasks for the development/test systems
Create and input configuration, master data, transactional data and pre-staged test data

Tester (Customer):
Executes the test cases and provides test feedback in accordance with the agreed tools
Submits detailed information regarding issues and defects
Verifies defects that have been fixed

System Administrator (Customer):
Responsible for availability of the development and test systems
Testing Tools:
There are two methods used in executing test cases: manual and automated. Depending on the nature of the testing,
it is usually the case that a combination of manual and automated testing is the best testing method.
Automation: Think of ROI
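A minimal break-even model can make the ROI question concrete. The figures below (manual effort, build effort, run effort) are illustrative assumptions, not project data:

```python
# Sketch: a simple break-even model for deciding whether to automate.
# All effort figures are illustrative assumptions.

def breakeven_runs(manual_hours, automation_build_hours, automated_run_hours):
    """Number of execution cycles after which automation pays off:
    the build cost is recovered by the per-run saving."""
    saving_per_run = manual_hours - automated_run_hours
    if saving_per_run <= 0:
        return None  # automation never pays off for this case
    return automation_build_hours / saving_per_run

# e.g. a regression suite: 8h manual, 40h to automate, 0.5h per automated run
runs = breakeven_runs(8, 40, 0.5)
print(f"Break-even after {runs:.1f} runs")
```

If the suite will be executed more often than the break-even number of runs (for example, in every regression cycle), automation is worth planning; otherwise manual execution is cheaper.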
Risks & Mitigation:
Any risks that will affect the testing process must be listed along with their mitigation. By documenting a risk, its occurrence can be anticipated well ahead of time, and proactive action can be taken to prevent it from occurring or to mitigate its damage. Sample risks are dependency on the completion of coding done by sub-contractors, or the capability of the testing tools.
Non Functional Tests:
Non-functional testing is the testing of a software application for its non-functional requirements.
Non-functional testing includes:
Documentation testing
Endurance testing
Load testing
Localization testing and Internationalization testing
Performance testing
Security testing
Scalability testing
Stress testing
Usability testing
Volume testing
Accessibility testing
As far as quality is concerned, the important non-functional tests should be planned. If any specific non-functional requirements are part of the customer requirements or specification, then exhaustive testing should be planned and included in the test scope.
Test Schedule/Test Planning:
A test plan should make an estimate of how long it will take to complete the testing phase. There are several requirements for completing a testing phase. First, testers have to execute all test cases at least once. Furthermore, if a defect is found, the developers will need to fix the problem, and the testers should then re-test the failed test case until it functions correctly. Last but not least, the testers need to conduct regression testing towards the end of the cycle to make sure the developers did not accidentally break parts of the software while fixing another part; such regressions can occur in test cases that were previously functioning properly.
Example:
Exploratory testing:
All testers perform ET for 2 hours weekly
Plan to cover all important test cases with at least 2 ET tours in 8 hours
Test Day:
Dedicated 4 test days planned within MIT
At least 30 to 40% code coverage (SCOV)
Quality Gates/Quality Checks:
Quality Gates can be introduced for every phase of development, such as Design to Development, Development to Testing, etc. As part of the quality gates, quality checks are performed.
Example :
Development to Testing –
Development Completion
No static check errors
Unit Testing Completion
Non Functional Tests coverage as part of Unit Testing
Demo of complete application
Code Review by internal and external architects
Unit test results documented and no open issues
MIT Strategy creation and roll-out
Test system readiness
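The Development-to-Testing gate above can be made mechanical by encoding it as an explicit checklist. The check names mirror the list above; which checks pass or fail here is purely illustrative:

```python
# Sketch: a Development-to-Testing quality gate as an explicit checklist.
# Check names mirror the example list; pass/fail values are illustrative.

gate_checks = {
    "development_complete": True,
    "no_static_check_errors": True,
    "unit_testing_complete": True,
    "unit_test_results_documented": True,
    "code_review_done": False,   # e.g. external architect review still pending
    "test_system_ready": True,
}

failed = [name for name, passed in gate_checks.items() if not passed]
if failed:
    print("Q-gate NOT passed, open items:", ", ".join(failed))
else:
    print("Q-gate passed - testing can start")
```

The point of the gate is that testing does not start until the open-items list is empty, which keeps defect containment in the phase where the defects were introduced.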
Priority and Severity Definition as part of test strategy:
Document and communicate SLA for Priority and Severity of messages
Example:
Very high:
A message should be categorized with the priority "very high" if the problem has very serious consequences for normal business processes. Urgent work cannot be performed. This is generally caused by the following circumstances:
A productive system is completely down. The imminent go-live or upgrade is jeopardized. The customer's core business processes are seriously affected. A workaround is not available.
The message requires immediate processing because the malfunction may cause serious losses.
High:
A message should be categorized with the priority "high" if normal business processes are seriously affected. Necessary tasks cannot be performed. This is caused by incorrect or inoperable functions in the system that are required immediately.
The message is to be processed as quickly as possible, as a continuing malfunction can seriously disrupt the entire productive business flow.
Medium:
A message should be categorized with the priority "medium" if normal business processes are affected. The problem is caused by incorrect or inoperable functions in the system.
Low:
A message should be categorized with the priority "low" if the problem has little or no effect on normal business processes. The problem is caused by incorrect or inoperable functions in the system that are not required daily, or are rarely used.
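The priority definitions above can be documented and communicated as a simple SLA lookup. The reaction times below are hypothetical placeholders; actual values must be agreed with the customer per project:

```python
# Sketch: the priority definitions encoded as an SLA lookup table.
# Reaction times are hypothetical placeholders, to be agreed per project.

PRIORITY_SLA = {
    "very high": {"impact": "core business stopped, no workaround",
                  "reaction": "immediate"},
    "high":      {"impact": "normal business seriously affected",
                  "reaction": "e.g. 4 hours"},
    "medium":    {"impact": "normal business affected",
                  "reaction": "e.g. 1 business day"},
    "low":       {"impact": "little or no effect on business",
                  "reaction": "e.g. 1 week"},
}

print(PRIORITY_SLA["very high"]["reaction"])
```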
Requirements traceability matrix (RTM)
Ideally, the software must completely satisfy the set of requirements. From design onwards, each requirement must be addressed in every single document in the software process. The documents include the HLD, LLD, source code, unit test cases, integration test cases and the system test cases. In a requirements traceability matrix, the rows hold the requirements and the columns represent each document. An intersecting cell is marked when a document addresses a particular requirement, with information relating the requirement ID to the document. Ideally, if every requirement is addressed in every single document, all the individual cells have valid section IDs or names filled in; then we know that every requirement is addressed. If any cell is empty, it indicates that a requirement has not been correctly addressed.
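The row/column structure described above can be sketched as a small program that exposes exactly the empty cells. The requirement IDs, document names and section references are illustrative:

```python
# Sketch: a minimal requirements traceability matrix. Rows are
# requirements, columns are documents; an empty cell exposes a gap.
# All names and section references are illustrative.

documents = ["HLD", "LLD", "Unit TC", "Integration TC", "System TC"]

rtm = {
    "REQ-001": {"HLD": "3.1", "LLD": "4.2", "Unit TC": "UT-07",
                "Integration TC": "IT-02", "System TC": "ST-01"},
    "REQ-002": {"HLD": "3.2", "LLD": "4.5", "Unit TC": "UT-11"},  # gaps
}

for req, cells in rtm.items():
    missing = [doc for doc in documents if doc not in cells]
    if missing:
        print(f"{req} not addressed in: {', '.join(missing)}")
```

Running this over the full matrix gives the test manager an immediate list of requirements that still lack integration or system test coverage.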
Test Status Collections and Reporting
When test cases are executed, the test manager and the project manager must know where exactly the project stands in terms of testing activities. To know where the project stands, the inputs from the individual testers must come to the test manager. This includes which test cases were executed, how long they took, how many test cases passed, how many failed, and how many are not executable. How often the project collects this status is also to be clearly stated; some projects have a practice of collecting the status on a daily or weekly basis.
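Aggregating the individual testers' inputs into the numbers the test manager needs can be sketched as follows; the tester names, test case IDs and statuses are illustrative:

```python
# Sketch: aggregating per-tester status rows into the executed/passed/
# failed/blocked figures reported to the test manager. Rows are illustrative.
from collections import Counter

results = [  # (tester, test_case, status)
    ("anna", "TC-01", "passed"),
    ("anna", "TC-02", "failed"),
    ("ben",  "TC-03", "passed"),
    ("ben",  "TC-04", "not executable"),
]

summary = Counter(status for _, _, status in results)
executed = summary["passed"] + summary["failed"]
print(f"executed={executed}, passed={summary['passed']}, "
      f"failed={summary['failed']}, not executable={summary['not executable']}")
```

Collected daily or weekly, this rollup is exactly the input for the test summary reports described below in the strategy.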
Test Entry and Exit Criteria Definition
It is very essential to define Entry Criteria, to know when to start testing, and Exit Criteria, to know when to stop testing.
Entry Criteria: Test system readiness, Unit test completion, no static check errors, test data availability
Exit Criteria: No P1/P2 issues open, etc
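The exit criterion "no P1/P2 issues open" can be checked mechanically against the defect list. The defect records below are illustrative:

```python
# Sketch: checking the exit criterion "no P1/P2 issues open"
# against a defect list. Defect records are illustrative.

defects = [
    {"id": "D-101", "priority": "P1", "status": "closed"},
    {"id": "D-102", "priority": "P3", "status": "open"},
    {"id": "D-103", "priority": "P2", "status": "open"},
]

blocking = [d["id"] for d in defects
            if d["priority"] in ("P1", "P2") and d["status"] == "open"]
print("Exit criteria met" if not blocking else f"Blocked by: {blocking}")
```

Lower-priority open defects (P3 and below) do not block the exit; they are carried over and tracked, which is why the criterion is phrased in terms of P1/P2 only.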
Test Report/Summary
The management may like to have a test summary on a weekly or monthly basis; if the project is very critical, they may need it even on a daily basis. This section must address what kinds of test summary reports will be produced for senior management, along with their frequency.
The test strategy must give a clear vision of what the testing team will do for the whole project for the entire duration. This document can be presented to the client, if needed. The person who prepares this document must be functionally strong in the product domain, with very good experience, as this is the document that is going to drive the entire team for the testing activities. The test strategy must be clearly explained to the testing team members right at the beginning of the project.
References:
http://service.sap.com/public/customdev
scn.sap.com