Test Strategy Template


Project xxxx

TEST STRATEGY

DOCUMENT NAME & LOCATION:

DOCUMENT VERSION:

DATE:

READERSHIP:

SUMMARY:

Amendment History

Version | Date | Comment | By | Approved
V0.1 | | | |

Associated Documents (This document should be read in conjunction with):

Title of Document | Version No/File Name | Date

Approval

Approver Name | Date
Project Manager |


CONTENTS

1. Introduction
1.1 Context
1.2 Purpose
1.3 Scope to be Tested
1.4 Out of Scope (Not Tested)
2. Testing Approach
2.1 Purpose
2.2 Test Objectives
2.3 Traditional Testing Approach
2.4 Overview of Test Phases
2.4.1 Component (unit) Testing
2.4.2 System Functional Testing
2.4.3 End to End (E2E) Testing
2.4.4 Technical (non-functional) Testing
2.4.5 User Acceptance Testing (UAT)
2.4.6 Operational Acceptance Testing (OAT)
2.4.7 Regression Testing
2.5 Proposed Test Approach
2.5.1 Release Schedule
2.5.2 Testing Schedule
2.6 Risk Approach
3. Test Deliverables
3.1 Testing Deliverables
3.2 Detailed Test Plans
3.3 Test Scripts
3.4 Test Progress Reporting
4. Test Management
4.1 Resource Management
4.2 Assumptions and Dependencies
5. Defect Management
5.1 Defect Management Approach
5.2 Defect Status and Process
5.3 Defect Severity
5.4 Defect Priority
5.5 Test Progress Reporting Metrics
6. Test Tools
6.1 Introduction
6.2 Overview of Testing Tool
6.3 Test Tool Requirements and Description
APPENDIX A – Example Testing Risk Log
APPENDIX B – Example Detailed Test Phase Description
APPENDIX C – Test Plan Contents
APPENDIX D – Example Testing Roles and Responsibilities


1. INTRODUCTION

1.1 Context

Project context

1.2 Purpose

This document sets the strategy for all testing within the scope of the project.

This document describes:

- the test approach

- the test phases

- the principles governing testing activities.

The delivery of the solution and the overall business strategy are excluded from the scope of this document.

1.3 Scope to be Tested

The following key components (sub-systems) will be tested:

- All aspects of the non-functional requirements

1.4 Out of Scope (Not Tested)

The following features and attributes will NOT be tested:

2. TESTING APPROACH

2.1 Purpose

This section describes the testing approach that will be adopted by the project.


2.2 Test Objectives

The test objectives are:

- To demonstrate that the solution meets all requirements

- To identify Defects (faults and failures to meet the actual requirements) with an agreed rectification plan

- To mitigate risk and demonstrate that the release is fit for purpose and meets user expectations.

2.3 Traditional Testing Approach

The traditional approach to testing uses the "V" model, which maps the types of test to each stage of development, as per the simplified diagram below:

[V-Model diagram: User Requirements ↔ User Acceptance Testing; Functional Specification ↔ End to End Testing; System Design ↔ System Functional Testing; Component Design ↔ Component Testing; Component Build at the base of the V]

It shows that for each requirement, specification or design document there is an associated testing phase (e.g. Component Design is associated with Component Testing).

Where possible, testing should be carried out according to the V-Model approach, using the Requirements Traceability Matrix as a key input to test design and planning.
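The traceability check can be illustrated concretely. The following Python sketch is an illustration only; the requirement IDs, test names and data layout are assumptions made for the example and are not prescribed by this template:

```python
# Illustrative sketch only: requirement IDs, test names and the data
# layout are assumptions for this example, not part of the template.

# A minimal Requirements Traceability Matrix: requirement -> planned tests,
# each test tagged with the test phase that will exercise it.
rtm = {
    "REQ-001": [("ST-010", "System Functional"), ("E2E-003", "End to End")],
    "REQ-002": [("ST-011", "System Functional")],
    "REQ-003": [],  # not yet covered - should be flagged
}

def uncovered_requirements(rtm):
    """Return requirement IDs with no planned test in any phase."""
    return [req for req, tests in rtm.items() if not tests]

if __name__ == "__main__":
    for req in uncovered_requirements(rtm):
        print(f"WARNING: {req} has no planned test coverage")
```

A check like this, run against the real matrix, gives an early warning of requirements with no planned test coverage before test design is signed off.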

2.4 Overview of Test Phases

List here the key phases of testing, e.g.:

- Component (unit) Tests

- System Functional Tests

- End to End Process (E2E) Tests

- Technical (Non-Functional) Tests



- User Acceptance Tests

- Operational Acceptance Tests

Each Test Phase outlined below should be described, including the following details:

- Owner

- Objective of the phase

- Test Approach: execution, environments, data, resources & location

- Scope

- Exclusions

- Entry & Exit criteria

- Sign-off procedures

- Testing tools to be used

2.4.1 Component (unit) Testing

This is the testing that is carried out within the early stages of the development lifecycle.

Describe here the key components and the Owner (e.g. THE CLIENT team, Vendor etc.) responsible for testing each component.

2.4.2 System Functional Testing

System Functional Testing is the testing of the core functional areas of the system against the agreed requirements and technical documents.

All the System Functional Tests to be carried out should be documented in the Detailed System Functional Test Plan to be produced before testing begins.

2.4.3 End to End (E2E) Testing

Once all the functional areas have been successfully tested, the next phase of testing will be the End to End process testing. End to End (E2E) testing covers the testing of the full end-to-end processes - as defined in the process-model

The key difference between End to End Testing and System Functional Testing is that E2E Testing primarily validates the complete process across the appropriate functions, rather than just the discrete functions.

All the E2E processes to be tested will be documented in the E2E Detailed Test Plan.

2.4.4 Technical (non-functional) Testing

Technical (non-functional) testing will primarily cover Performance, Volume and Scalability of the solution. The testing will be based on the requirements, technical and process documents.

Non-functional requirements should have been gathered in the Requirements Traceability Matrix.

A range of test volume scenarios will be specified in the Non-Functional Testing Detailed Test Plan.


The scenarios will be comparable with the expected operational volumes. A set of exceptional volume Tests will also be specified to demonstrate the robustness of the solution in exceptional volume conditions.

A subset of these tests will also be executed (i.e. re-run) as part of the Operational Acceptance Testing (OAT).

2.4.5 User Acceptance Testing (UAT)

User Acceptance Testing (UAT) is the testing that is conducted by the End User Representatives to ensure that the delivered system meets the user defined functional requirements.

It is expected that the User Representatives will select a subset of tests from the System Functional and E2E test scripts.

These tests will be documented in the UAT Detailed Test Plan by the Test Analysts in advance of the execution of the UAT. During the execution of UAT, the User Representatives will also be allowed an opportunity to carry out un-documented tests.

Once the UAT tests are successfully completed, UAT can be signed off by the business team (including the SPA).

2.4.6 Operational Acceptance Testing (OAT)

Operational Acceptance Testing is the last major test phase. It is executed on the final implemented solution to confirm that it can be supported and meets the operational support requirements agreed in the Support Model.

Once these tests are passed, the solution can be promoted to operational status.

If there are any unresolved priority 1 or priority 2 defects, the Application Manager may reserve the right not to accept the system into operational support.

2.4.7 Regression Testing

Regression testing becomes necessary when:

- A new release or bug fix is delivered following the resolution of a Defect;

- Enhancements to the functionality are incorporated in the system; or

- The technical environment is altered.

Regression Testing is performed by re-running a selected set of the test scripts chosen according to the nature of the change. All test scripts will be designed to be re-run as necessary.

(Please note that regression testing tends to be carried out as part of the phases above and is not a separate testing phase in its own right.)
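To illustrate what "chosen according to the nature of the change" can mean in practice, the sketch below assumes each script is tagged with the functional areas it exercises; the script names, tags and changed areas are invented for the example:

```python
# Illustrative sketch: script names and area tags are invented for the
# example; real selection criteria would come from the change analysis.
test_scripts = {
    "ST-010": {"areas": {"billing", "reporting"}},
    "ST-011": {"areas": {"login"}},
    "E2E-003": {"areas": {"billing", "fulfilment"}},
}

def select_regression_pack(changed_areas, scripts):
    """Pick every script whose tagged areas overlap the changed areas."""
    return sorted(
        name for name, meta in scripts.items()
        if meta["areas"] & set(changed_areas)
    )

print(select_regression_pack({"billing"}, test_scripts))
# ['E2E-003', 'ST-010']
```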

2.5 Proposed Test Approach

Outline here the likely sequence of testing:

2.5.1 Release Schedule

The following table outlines the delivery schedule of different code releases:


2.5.2 Testing Schedule

Outline below the proposed high-level schedule for testing:

(A detailed test plan should be produced early in the Execute stage.)

2.6 Risk Approach

It is often impractical to perform a full, exhaustive set of tests for a solution, since this would be very costly in terms of both money and time, and because the vendors should have tested their products prior to release to THE CLIENT.

The objective is to optimise the testing resources and reduce test time without compromising the quality of the final solution.

Therefore, all major test activities will carry risks, and an impact and likelihood analysis should be carried out to validate the choices being made.

List all key Testing risks below:



3. TEST DELIVERABLES

3.1 Testing Deliverables

This section details the type and structure of the test documentation that needs to be produced. The following is a list of documents that will be delivered as part of the testing activities:

- Detailed Test Plans

- Test Scripts for all test phases

- Testing Progress reporting

The following sub-sections provide an overview of each of the key deliverables.

3.2 Detailed Test Plans

The core of each Detailed Test Plan is based on the requirements, design documentation and other non-functional criteria.

The Detailed Test Plan will document the test method to be adopted for the relevant test phase.

The Detailed Test Plan should cover:

System Functional Testing

Technical (non-functional) Testing

End to End Process Tests

User Acceptance Testing

Operational Acceptance Testing

Within the Detailed Test Plan, a full description of the following should be provided:

the test environment

all required test scripts

test data

interfaces (Integration) required.

Once the Detailed Test Plans have been approved, the test scripts can be documented.

3.3 Test Scripts

A test script describes in detail how the test is conducted and what results are expected.

A single test script may cover one or more requirements. However, typically a single requirement is broken down into sub-requirements/test conditions. This allows the Testers to show exactly how requirements have been covered by the test scripts and enables the Testing team to track issues related to specific test scripts.

Each test script will detail the following (an illustrative structured sketch of this record follows the list):

- Test Name – A unique reference number followed by the test name identifying the test

- Requirement cross reference - A reference to the requirement(s) and source documentation

- Revision History - with original, review and update details related to specific changes to the test

- Prerequisites – reference to any scripts that need to be run before individual scripts can be executed.

- Test Description - A summary description of the purpose of the test

- Test Data – The test data to be used

- Test Steps – The instructions for running the test, e.g. the actions that need to be performed in order to exercise the piece of functionality being tested

- Expected Results – A definition of the test results that are expected to be observed if the test is successful. Enough information should be supplied to enable the tester to determine unambiguously whether or not the test has passed

- Actual Results – The actual results that were observed and a reference to any test evidence. As a rule the tester will store evidence of the test results where possible. This will include a record of the build being tested, whether the test passed or failed, and a list of any test observations raised

- Pass / Fail - A record of whether the test was passed or failed.
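As referenced above, the field list maps naturally onto a structured record. The following Python sketch (Python 3.10+) shows one possible form; the field names follow the list above and the example values are invented:

```python
from dataclasses import dataclass, field

# One possible structured form of a test script record; the field names
# follow the list above, and the example values are invented.
@dataclass
class TestScript:
    test_name: str                 # unique reference + name
    requirement_refs: list[str]    # requirement cross references
    revision_history: list[str] = field(default_factory=list)
    prerequisites: list[str] = field(default_factory=list)
    description: str = ""
    test_data: str = ""
    steps: list[str] = field(default_factory=list)
    expected_results: str = ""
    actual_results: str = ""
    passed: bool | None = None     # None until the script has been run

script = TestScript(
    test_name="ST-010 Verify invoice totals",
    requirement_refs=["REQ-001"],
    steps=["Log in as a billing user", "Open invoice INV-42", "Check total"],
    expected_results="Invoice total matches the sum of the line items",
)
```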


3.4 Test Progress Reporting

Progress reports will be produced at regular intervals (typically weekly). The report will show:

- Test Phase

- System Under Test

- Test environment

- Total no. of tests

- No. of tests completed

- No. of tests passed

- No. of tests failed

Where appropriate, a detailed report highlighting all outstanding risks and potential business and/or operational impacts will also be produced.
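The execution counts in the weekly report can be derived mechanically from the execution log. The sketch below is illustrative; the execution records and status values are assumptions for the example, and in practice the figures would come from the test management tool:

```python
from collections import Counter

# Illustrative sketch: the execution records are invented example data.
executions = [
    {"phase": "System Functional", "test": "ST-010", "status": "passed"},
    {"phase": "System Functional", "test": "ST-011", "status": "failed"},
    {"phase": "System Functional", "test": "ST-012", "status": "not run"},
]

def progress_summary(executions, phase):
    """Compute the weekly report counts for one test phase."""
    phase_runs = [e for e in executions if e["phase"] == phase]
    by_status = Counter(e["status"] for e in phase_runs)
    completed = by_status["passed"] + by_status["failed"]
    return {
        "total tests": len(phase_runs),
        "tests completed": completed,
        "tests passed": by_status["passed"],
        "tests failed": by_status["failed"],
    }

print(progress_summary(executions, "System Functional"))
# {'total tests': 3, 'tests completed': 2, 'tests passed': 1, 'tests failed': 1}
```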

4. TEST MANAGEMENT

4.1 Resource Management

The following is a list of all the key testing roles and core responsibilities that are required during the testing phase:

- Test Manager – responsible for all project testing

- End to End (E2E) Test Manager – responsible for the E2E Test activities

- Test Phase Team Lead – responsible for input into the test phases

- Test Analyst – responsible for documenting and executing the tests

- Technical Test Analyst – responsible for technical tests

Depending on the scale and nature of the system (e.g. whether it is provided by an external vendor), it may be possible to combine roles, so that a combination of a Test Manager and Test Analysts can fulfil all the testing responsibilities.

List the key resources here:

Role | Organisation / Team | Name


4.2 Assumptions and Dependencies

Assumptions

List here any assumptions, e.g.:

- The vendors are responsible for fully testing their software before it is released to THE CLIENT.

- Vendors are available to review any test results and defects that the team feels may be associated with the product software.

- It is expected that all users are on IE 7+.

- The project Business Analysts are available to input into the creation of the test cases.

- The test documentation will be created by the test analysts.

Dependencies

List any key dependencies, e.g.:

- Build and component testing delivered on time and to a reasonable quality (i.e. all entry criteria met and the system is stable during the first week of test execution).

- Provisioning of the appropriate environments for each phase of testing.

- Utilisation and support of instances of the Test tool

- Service Level Agreements in place for performance testing.

- Service Level Agreements in place for the testing environments.


5. DEFECT MANAGEMENT

5.1 Defect Management Approach

- Defect management requires the Testing team to document and track (i.e. with an audit trail) all defects that have been raised, resolved and that remain open.

- It provides transparency to the project and management on defect status and priorities.

5.2 Defect Status and Process

The following table shows the statuses of a defect:

Status | Description
Identified | A new incident is identified.
Assigned | An owner has been agreed and a fix is being created.
Fixed | Development (i.e. the Vendor) has a fix for the defect.
Released For Retest | The fix is released (i.e. a code drop by the vendor) for the test team to re-test.
Closed | The fix has been successfully tested, or it is agreed no action is required.

All logged Defects should contain the following information:

- A unique identifier (defect number)

- Title for the defect

- Test Phase and test number that identified the defect

- System Area – functional area this defect impacts (best estimate)

- The severity classification of the defect

- Estimated Fix Time - an estimated timescale for resolution (determining the impact on testing)

- A full description of the Defect and how to recreate the defect

- An indicator of the status of the Defect.

- Level of risk on Go-Live

Wherever possible, the description of the Defect will be written in non-technical terms or the impact of the Defect will be described in non-technical terms.

Defects will be logged in the following situations:

- When the actual result does not match the expected result and the expected result is correct

- When an expected result does not match an actual result but the actual result is found to be correct. In this case the action will be to correct the expected result and the Defect log will provide an audit trail

- When there is an unexpected outcome to a test that is not covered by the expected result. This may result in the creation of a new entry in the requirement catalogue

- When a Defect is raised to which no immediate acceptable response is available.


Once the project enters the System Test execution phase, typically each morning during test execution, the Testing Team will review all Defects raised since the previous meeting to determine any conflicts or impacts across the various phases of test.

After each review session, the status of the defect will be updated and any re-testing of the defect fix and regression testing will be carried out under the guidance of the Test Manager.


The following flow chart provides an overview of the Defect management process.

[Flow chart: Raise Defect → Assign Defect → Defect Fixed → Fix Applied and Re-tested → pass: Defect Closed / fail: back to Assign Defect]
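The statuses and flow above amount to a small state machine, which can be made explicit. The sketch below is an illustration only; the transition table paraphrases the status table and flow chart in this section:

```python
# Illustrative sketch of the defect lifecycle in section 5.2; the
# transition table paraphrases the status table and flow chart above.
ALLOWED_TRANSITIONS = {
    "Identified": {"Assigned"},
    "Assigned": {"Fixed"},
    "Fixed": {"Released For Retest"},
    # pass -> Closed; fail -> back to the owner for another fix
    "Released For Retest": {"Closed", "Assigned"},
    "Closed": set(),
}

def move(defect, new_status):
    """Advance a defect to a new status, enforcing the agreed lifecycle."""
    if new_status not in ALLOWED_TRANSITIONS[defect["status"]]:
        raise ValueError(
            f"{defect['id']}: cannot move {defect['status']} -> {new_status}"
        )
    defect["status"] = new_status

d = {"id": "DEF-001", "status": "Identified"}
for step in ["Assigned", "Fixed", "Released For Retest", "Closed"]:
    move(d, step)
print(d)  # {'id': 'DEF-001', 'status': 'Closed'}
```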


5.3 Defect Severity

The table below describes the levels of defect severity:

Severity | Description
1 - Critical | The entire system or a key business process is unusable or does not meet the needs of the business, many users are affected and no workaround is available; or corruption or loss of data occurs that is not immediately recoverable and prevents the business from continuing.
2 - High | Part of the system or a key business process is unusable or does not meet the needs of the business, few users are affected but a workaround is available; or corruption or loss of data occurs that is immediately recoverable and allows the business to continue.
3 - Medium | A non-critical Defect occurs, typically affecting a single user. The Defect affects the ability to provide the best service, but there is a workaround.
4 - Low | Cosmetic errors, documentation anomalies, requests for information or advice.

5.4 Defect Priority

This table describes the levels of defect priority:

Priority | Description of the Impact on the Testing Activity
1 - Emergency | Incident prevents all testing from continuing. All testing is suspended. Target resolution: within 4 hours.
2 - High | Incident severely impacts testing, but testing is able to continue, possibly with a workaround. Testing of particular function(s) is possibly suspended. Target resolution: within 24 hours.
3 - Medium | Incident inconveniences testing progress. Testing is able to continue without much impact. Testing of a single function is possibly suspended. A test script or procedure error that requires a fix. Target resolution: within 3 days. If the defect cannot be resolved in the specified period, the level of risk on Go-Live will be assessed.
4 - Low | Incident has little or no impact on testing progress. Target resolution: as agreed.

5.5 Test Progress Reporting Metrics

The key performance indicators that will be used to measure the progress and success of testing are listed below; a brief counting sketch follows the list:

- Test Execution:

o Number of Planned Test Cases (total)

o Number of Planned Test Cases (Cum)

o Number of Passed Test Cases (Cum)

o Number of Failed Test Cases (Cum)

o Number of Test Cases in Progress (Cum)

- Defects

o Total defects raised (and by priority)

o Total defects fixed (and by priority)

o Total defects in progress (and by priority)

o Total defects closed (and by priority)

o Total defects by functional area

o Defect severity by Root cause

o Defect severity by application

o Defect severity by Defect Type

o Defect state by application
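As referenced above, the defect metrics are simple counts and groupings over the defect log. The sketch below is illustrative; the defect records are invented example data:

```python
from collections import Counter

# Illustrative sketch: the defect records are invented example data.
defects = [
    {"id": "DEF-001", "priority": "1 - Emergency", "area": "billing"},
    {"id": "DEF-002", "priority": "3 - Medium", "area": "billing"},
    {"id": "DEF-003", "priority": "3 - Medium", "area": "login"},
]

total_by_priority = Counter(d["priority"] for d in defects)
total_by_area = Counter(d["area"] for d in defects)

print(dict(total_by_priority))  # {'1 - Emergency': 1, '3 - Medium': 2}
print(dict(total_by_area))      # {'billing': 2, 'login': 1}
```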


6. TEST TOOLS

6.1 Introduction

This section describes the types of tools that are required to manage the testing activities contained within this document.

6.2 Overview of Testing Tool

Describe here which tool is going to be used and how it allows the user to organise and manage the testing activities. Typically, the tool:

- allows the user to catalogue the requirements

- specifies the tests to be executed to validate each requirement

- allows the logging of test results

- provides a reporting function that produces management reports and metrics on testing progress.

6.3 Test Tool Requirements and Description

The following table shows the test tool(s) that will be used to support the testing activities:


APPENDIX A – EXAMPLE TESTING RISK LOG

Ref | Risk | Prob | Impact | Owner | Mitigation
1 | Test environment availability for all required testing | H | H | Test Manager | Ensure that Test Environments are documented and provisioned well in advance of the Test execution phase for each of the Projects in scope.
2 | Resource constraints for test preparation and execution | M | H | Project Manager | Plan resource requirements for both the Test preparation and Test execution phases with sufficient time to secure additional resource where required.
3 | Late changes in scope | H | M | Project Manager | Advance notice of changes impacting any in-scope project can feed into any required re-prioritisation. Contingencies to be considered for potential delays.
4 | Inter-dependencies between project streams could hinder progress on a single deliverable required for test preparation or execution | H | M | Project Manager | Where applicable, test harnesses will be created and managed by each distinct project, but each harness should closely represent the source or target system.
5 | External inter-dependencies with vendors; late delivery could severely hinder progress | H | M | Project Manager | Ensure that a close relationship is maintained with external dependency partners and make provision for delays when encountered.
6 | Test documentation (Detailed Test Plans) not approved prior to the scheduled Test start date | L | H | Test Manager | Testing Team to ensure that all Test documentation is approved prior to commencement, as this is a key part of the Entry Criteria for each Test phase.
7 | Infrastructure components tested in isolation may not fully prove the validity of the adopted solution | M | H | Test Manager | The System Integration Test releases will clarify this point, but the sooner the solution components are tested together the better.


APPENDIX B – EXAMPLE DETAILED TEST PHASE DESCRIPTION

System Functional Testing

Accountability: Test Manager

Responsibility: Test Manager

Objectives: The objective of System Testing is to verify that the whole system performs as described in the functional & technical specification documents.

Approach: Location – the System testing will be conducted …

Scope: The Testing Team, in conjunction with the users and Project team members, defines the scope of System Functional Testing. The following test types are in scope for this phase of the testing:

- Functional Testing

- Usability (User Interface)

- Security Testing

- Error handling

- Regression (if applicable)

- User performance (response time) Testing

Exclusions: The following exclusions will apply:

- Some interfaces may not be available to test against

- Penetration testing

Entry Criteria: The following entry criteria must be met before the commencement of System testing:

- 100% of agreed functionality has been delivered (subject to the functionality contained in the release being tested)

- Vendors have fully tested their developments and are formally delivering the software to THE CLIENT (this will include the installation of the software)

- The System Functional Test Plan has been reviewed and signed off by the agreed reviewers and approvers; this will primarily be the Project Team members

- System Functional Test Scripts completed and approved

- All components of the solution correctly configured in the System Test environment by the vendors

- Any test data either pre-loaded or available to load as required

- Version, Release, Configuration and Defect Management tools and processes defined and implemented

- System Configuration documented and approved.

Entry Criteria will be assessed prior to test execution. Variances will be noted and documented by the Test Manager and System Test Team Lead in a report, along with a risk assessment and recommendation to proceed. Where entry criteria have not been met, the decision to proceed with test execution is at the discretion of the IT&S Project Manager.

Exit Criteria: System Functional Testing is completed when:

- 100% of pre-agreed system test cases have been executed and all high priority test cases have passed successfully.

- All defects found are recorded.

- All severity 1 & 2 defects are resolved, retested and closed.

- All outstanding severity 3 & 4 defects have documented workarounds and a schedule, agreed between the business, development and testing teams and the vendors, of when they will be corrected.

- A set of pre-defined regression tests has been re-run after fixes were applied, with no new errors and/or Defects identified during the regression tests.

- Component-tested code, executables and software configuration are under version control.

- The Test Summary Report is completed and signed off.

Sign-off: Completion is achieved when the exit criteria have been met. E-mail sign-off of the System Test, Test Summary Report (TSR) is performed by the Approvers outlined in the Testing Strategy and System Test Plan.

Tools: The Test Manager is responsible for monitoring progress of the System Testing and ensuring all tests and results are documented. Test cases will be entered into Test Director or Excel and then executed. Actual results will be compared to expected results. Test results (passed or failed) are logged in Test Director or Excel along with any defects found. Status reports will be prepared from Test Director or Excel.


APPENDIX C – TEST PLAN CONTENTS

The Test Plan will have the following contents:

Test Plan Identifier

Unique identifier associated to this Test Plan document

Approvals

Names and titles of all persons who must approve this Test Plan

Introduction

Objective of this Test Plan

Background (summary of the project)

Scope of the Testing phase this Plan relates to

References to source material i.e. Project Plan, Configuration Management Plan, Policies & Standards

Test Items

Identification of the Test Items, including version/revision levels

References to all source documentation e.g. Requirements Specification, Design Specification, Functional/Technical Specification, Technical Solution docs, User Guide, Operations Guide

Features to be tested

Identification of all hardware features, and all software features and combinations of software features, to be tested (descriptions of functionality and activities to be tested)

Features not to be tested

Identification of all features and combinations of features that will not be tested, and the reasons why

Approach

Description of the overall approach to Testing, specifying for each major group of features or feature combinations the approach that will ensure these feature groups are adequately tested

Specification of the major activities, techniques and tools that will be used to test the designated groups of features

The approach will be described in sufficient detail to identify the major testing tasks

Identification of the techniques that will be applied to judge the comprehensiveness of the testing effort

Lists of both the Entry and Exit Criteria for the Tests

Any additional completion criteria e.g. error frequency

The techniques to be used to trace requirements will be specified

Item pass/fail criteria

The criteria to be used to determine whether each test item has passed or failed testing, and the Severity / Priority assigned to each class of Defect

Suspension criteria and resumption requirements

The criteria to be used to suspend all or a portion of the testing activity on the test items associated with this Plan

The testing activities that must be repeated, when testing is resumed


Test Deliverables

Identification of the deliverable Test documents i.e. Test Plan, Test Specifications, Test Scripts, Test Logs, Test Defect Reports, Test Completion Report, Test Data

Testing Tasks

The set of tasks necessary to prepare for and perform testing (Task, predecessor, responsibility, effort, end date)

Identification of inter-task dependencies and any special skills required

Environment needs

Specification of both the necessary and desired properties of the Test environment

The physical characteristics of the environment including the hardware, communications and system software, the mode of usage, and any other software or supplies needed to support the test

The level of security that must be provided for all components of the Test environment, i.e. hardware, software and data

Identification of the tools needed

The office space and desks etc required for the Test team

Responsibilities

Identification of the groups responsible for managing, designing, preparing, executing, witnessing, checking and resolving

Identification of the groups responsible for providing the Test Items and Environment Needs identified earlier

Staff and training needs

Specification of the Test staffing by skill level

Training requirements and options for providing necessary skills

Schedule

Test milestones identified in the Project plan

Any additional Test milestones needed

Estimates for the time required to perform each Testing task

The schedule for Testing tasks and Test milestones

Risks

Identification of the high-risk assumptions of the Test Plan

NOTES:

Within the Test Plan, a full description will be provided of the test environment, all required test scripts and harnesses, and all interfaces required with third party systems. Where the environment is not fully representative of the live environment, the reasons for the limitation will be provided and a risk assessment undertaken to determine the impact of this on the validity of the results obtained during the tests.

The Test Plan will also specify the input data sources and any expected outputs, including volumes and types of data. An explanation will be provided for each data type and flow, relating it to the predicted or measured live environment.


APPENDIX D – EXAMPLE TESTING ROLES AND RESPONSIBILITIES

The following outlines the test team roles and their responsibilities:

Test Manager

- Responsible for producing the Test Strategy.

- Deliver the High Level Test Plan to be utilised for the delivery of detailed Test Plans.

- Deliver Detailed Test Plans for all the respective test areas.

- Recruitment of the Test Team (e.g. Test Analysts).

- Accountable for Phase Test Plans, e.g. ST, UAT, OAT etc.

- Leading the end-to-end testing effort as outlined in this Test Strategy document (ST, UAT, OAT etc.).

- Management of all testing resources.

- Testing management reporting.

- Responsible for creating and maintaining the test project plan for all core testing activities (as baselined in MS Project).

- Responsible for ensuring the agreed delivery of all project testing deliverables (as baselined).

- Responsible for estimating, planning and ensuring an appropriate level of resourcing for the project testing efforts.

- Responsible for managing all project testing related issues, risks and dependencies, raising them according to the agreed issues and risk management procedures.

- Responsible for ensuring the specified testing entry and exit criteria are met for ST, E2E, UAT and TECH.

- Main escalation point between testing and other teams, i.e. business and Development.

Test Phase Team Lead

- Provide input into the Test Strategy.

- Responsible for providing input into estimating, planning and ensuring an appropriate level of resourcing for the test phases.

- Create, maintain and ensure sign-off of the Test Plans.

- Lead the testing effort, including delivery of the test cases/scripts, data and results.

- Ensure test artefacts delivered are stored correctly in Test Director or Excel.

- Defect Management relevant to the responsible test phase.

- Manage test preparation and execution risks and issues.

- Create, maintain and ensure sign-off of the Test Summary Reports.

Test Analysts (TA)

- Provide input into the Test Plans.

- Undertake testing activities, ensuring these meet agreed specifications.

- Create, maintain and execute the test cases in Test Director or Excel.

- Devise, create and maintain test data.

- Analyse and store test results in Test Director or Excel.

- Raise, maintain and retest defects in Test Director or Excel.

- Provide input into the Test Summary Reports.

Technical Test Analysts (TTA)

- Provide input into the Technical Test Plans.

- Undertake technical testing activities, ensuring these meet agreed specifications.

- Create, maintain and execute the test cases in Test Director or Excel.

- Devise, create and maintain test data.

- Analyse and store test results in Test Director or Excel.

- Raise, maintain and retest defects in Test Director or Excel.

- Provide input into the Test Summary Reports.

Business Analysts (BA)

- Provide business input into the Test Plans, test cases and test data.

- Execute test cases stored in Test Director or Excel.

- Analyse and store test results in Test Director or Excel.

- Raise, maintain and retest defects in Test Director or Excel.

- Provide input into the Test Summary Reports, i.e. business workarounds and impact assessment.

Technical Lead (Team)

- Provide solution details to Test Analysts.

- Review detailed test plans produced by Test Analysts.

- Input into and review test cases produced by Test Analysts.

- Review and categorise/prioritise test results.

- Validate, raise and progress defects to resolution.

Vendors

- Input into the test cases.

- Review and sign off the DTP and test cases/scripts.

- Review of test results.

- Ownership of defects associated with the vendor solution.

- Responsibility for issue resolution if associated with the vendor product/solution.

- Assist in testing and defect reproduction for de-bug information purposes.

Global Operations (GO)

- Deliver the OAT Detailed Test Plan.

- Delivery and reporting of OAT testing results and progress.

- Management of the OAT environments.

- Execute OAT tests.

- Validate, raise and progress defects to resolution.

- Sign-off OAT.

Business User from THE CLIENT Legal

- Input into the development of the User Acceptance test scripts.

- Review and sign off User Acceptance Detailed Test Plans.

- Review and sign off User Acceptance test requirements and scripts.

- Agree acceptance criteria based on the successful completion of test execution.

- Perform User Acceptance Testing.

- Sign-off UAT.