
    Test Maturity Model integrated (TMMi)

    Survey results

How Mature Are Companies' Software Quality Management Processes in Today's Market?

    2011 update

    Listen | Challenge | Understand | Interpret | Create


    Table of Contents

Executive Summary
    Notable Results
    Areas for Concern
    Conclusion
Background
    About TMMi
    The Survey
The Practice areas of TMMi Level 2 in detail
    Overall rating
    Test Policy and Strategy
    Test Design and Execution
    Test Planning
    Test Environment
    Test Monitoring and Control
    Conclusion
Industry Overview
Appendix 1
    Background to the TMMi Model and the TMMi Foundation

Copyright 2011 Experimentus Ltd. This document and any information herein are confidential and copyright property of Experimentus Ltd and without infringement neither the whole nor any extract may be disclosed, loaned, copied or used for manufacturing, provision of services or other purposes whatsoever without prior written consent. No liability is accepted for loss or damages from any cause whatsoever from the use of the document. Experimentus Ltd retain the right to alter the document at any time unless a written statement to the contrary has been appended.


Executive Summary

There is very little data on the maturity of testing practices throughout the software quality life cycle in the industry, and many people refer to that maturity with a level of certainty not backed up by any real data. In 2009 Experimentus established a survey for this very purpose. Two years on from that first survey, data is still being collected. This document reflects the up-to-date view of organisational test maturity, based upon the industry-standard Test Maturity Model integrated (TMMi); see www.tmmifoundation.org.

Of the 100-plus companies, across 9 industry sectors, who responded:

In 2009, 72.5% of the respondents were at TMMi Level 1 heading for Level 2, meaning they were working in a chaotic, hero-based way but starting to build project-based processes. The 2010 survey sees that change in the right direction: now only 63% are at TMMi Level 1 heading for Level 2, a relative reduction of 13%.

In 2009, 27.5% were at TMMi Level 2 heading towards Level 3, meaning they have some established project-based process and are moving towards implementing process at an organisational level. For the 2010 survey we have seen a growth in the average maturity of test process, with 37% now achieving Level 2 and heading for Level 3, a relative improvement of 35%.

As in our 2009 survey, unfortunately none of the 2010 respondents reached Level 3, although many have improved what they do and are therefore moving closer to achieving this.

[Chart: 2010/2009 maturity across respondents. TMMi Level 1: 72.5% in 2009, 63.0% in 2010. TMMi Level 2: 27.5% in 2009, 37.0% in 2010.]

Notable Results

The results suggest software testers are still consistent at designing and planning testing, but they continue to be less good at setting goals, monitoring tests and managing the plans. Combine this with inconsistency in how testing is estimated and it should come as no surprise that testing is still seen as a bottleneck to delivery.

Meanwhile, test environments are well planned and reflect production-like environments fairly consistently for the later stages of testing (e.g. acceptance testing). While this will always be a difficult area to manage, its importance is clearly recognised.

Areas for Concern

It is still surprising that metrics continue to be a major weakness in the management of software quality and testing. The marked drop in the numbers who have metrics in place reflects organisations now revisiting them, as a result of not getting the right metrics in the first place, or of data that was not consistent in its meaning. This affects the measurements and therefore the



decisions which can be made as a result. This leaves many organisations without the means to properly manage, control and improve what is undoubtedly a large proportion of project costs.

Conclusion

It is not as though testing and software quality management is a new discipline in the software development lifecycle, with appropriate good practices not yet developed to cope with new processes. Far from it. It is perhaps a damning indictment of the industry that, after all these years, we can consistently design and plan testing but give little thought to effectively measuring the success and efficiency of this activity (which, combined with the costs of rework, forms a significant proportion of project costs).

The ambition of testing to be recognised as a profession will never be realised unless there is measurement in place to manage and report on activities, to assist in creating roadmaps for self-improvement and the delivery of better quality software on time and within budget.

Management also have a considerable amount of hidden costs in the development lifecycle which, without the right metrics, they cannot hope to understand or control. It does raise the question as to why any organisation would want to run without consistent, meaningful and actionable metrics, especially when these can lead to self-improvement.

Overall, the trend towards better management of software quality and testing is improving and over time will dramatically reduce what are currently hidden costs. The questions still to be answered are:

How quickly will the industry adopt appropriate good practices?

and

With the current state of immaturity, where can you turn to get appropriate good practices?

Until recently the TMMi model covered only Maturity Levels 2 and 3 (Managed and Defined); with the release of Levels 4 and 5, covering Management and Measurement and Optimisation, the TMMi model now provides a valuable source of appropriate good practices. The use of the model has already been established by both service providers and clients to great success, with dedicated testing practices and even government departments beginning to benefit from its use, and in some cases looking to be certified as a way of differentiating their capability from that of their competitors.



Background

In December 2010, the TMMi Foundation published the full TMMi model, from Level 1 to 5, together with the requirements for organisations and individuals to have their assessment method for the TMMi model reviewed and accredited. At the same time it provided guidance on what is required to attain Accredited Assessor status.

This release was a significant achievement for the TMMi Foundation, marking the beginning of an industry-wide roadmap for implementing software quality management into the application development lifecycle. The Foundation's roots go back to 2004, when a small group of quality process improvement enthusiasts from around Europe met for the first time and decided it would make sense to develop and support a single, non-commercial test improvement model. Since then, a growing number of supporters have acknowledged the positive difference the TMMi model makes to the delivery of increased quality and reduced costs.

In September 2008, Experimentus became the first company in the UK to have an accredited assessment method, accredited Assessors and Lead Assessors. After receiving its accreditation, Experimentus conducted a survey to understand the maturity of software quality processes across the IT industry. Over 100 respondents, from many different industries, completed the survey. As the survey had been well received, and as there is little other data available about the maturity of the software testing industry, it was decided that the survey would be updated.

This paper details the results of the 2010 survey and reflects the changes that have taken place within the software quality management and test practices of organisations over the last two years.

About TMMi

In the same way as the CMMI (Capability Maturity Model Integration) process model is split over five levels, the TMMi model is structured into the five levels shown below.

LEVEL 1: INITIAL
    Chaotic process
    Dependent on heroes
    No understanding of the cost of quality

LEVEL 2: MANAGED
    Test policy and strategy
    Test planning
    Test monitoring and control
    Test design and execution
    Test environment

LEVEL 3: DEFINED
    Test organisation
    Test training programme
    Test life cycle and integration
    Non-functional testing
    Peer reviews

LEVEL 4: MANAGEMENT & MEASUREMENT
    Test measurement
    Software quality evaluation
    Advanced peer reviews

LEVEL 5: OPTIMISATION
    Defect prevention
    Test process optimisation
    Quality control


Each maturity level of the model is made up of a series of components.

At the top we have the Maturity Level, which indicates an organisation's, project's or team's level of maturity. The Maturity Level is made up of a series of Process Areas (such as Test Policy and Strategy). These are the areas in which goals need to be achieved to verify that a Maturity Level has been reached.

Each Process Area contains Generic and Specific Goals and Specific Practices, which in turn provide the details required to implement the Process Area. So, for example, under the Test Policy and Strategy Process Area there are the following supporting Practices:

    Specific Goal - Establish a test policy

    Specific Practice 1.1 Define test goals

    Specific Practice 1.2 Define the test policy

    Specific Practice 1.3 Distribute the test policy to stakeholders

Each Process Area is looked at in terms not only of its existence but also its deployment and the effectiveness of that deployment.
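As an illustration of this hierarchy, a minimal sketch in Python is shown below. The TMMi model itself is documentation rather than software, so the class names are our own; the example data is the Test Policy and Strategy practice list quoted above.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SpecificPractice:
        identifier: str
        description: str

    @dataclass
    class SpecificGoal:
        description: str
        practices: List[SpecificPractice] = field(default_factory=list)

    @dataclass
    class ProcessArea:
        name: str
        goals: List[SpecificGoal] = field(default_factory=list)

    @dataclass
    class MaturityLevel:
        number: int
        name: str
        process_areas: List[ProcessArea] = field(default_factory=list)

    # The Test Policy and Strategy example from the text, expressed in this structure.
    test_policy_and_strategy = ProcessArea(
        name="Test Policy and Strategy",
        goals=[SpecificGoal(
            description="Establish a test policy",
            practices=[
                SpecificPractice("SP 1.1", "Define test goals"),
                SpecificPractice("SP 1.2", "Define the test policy"),
                SpecificPractice("SP 1.3", "Distribute the test policy to stakeholders"),
            ],
        )],
    )
    level_2 = MaturityLevel(2, "Managed", process_areas=[test_policy_and_strategy])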

    Please refer to Appendix 1 for more background to the TMMi model.

The Survey

The survey was designed to align major testing activities with the Process Areas analysed in a TMMi assessment, with the aim of indicating how closely current testing practices align with this industry standard. The initial survey took place during the last quarter of 2008 and closed in February 2009; the second survey closed in November 2010. Each respondent was asked to review a series of statements and to answer each with one of the following responses:

Strongly agree - the process in question is in place and well established


    The Practice areas of TMMi Level 2 in detail

Overall rating

The following chart reflects a 35% relative increase in Level 2 maturity for all respondents since the survey results published in March 2009. However, no one has yet reached Level 3.

[Chart: 2010/2009 maturity across respondents. TMMi Level 1: 72.5% in 2009, 63.0% in 2010. TMMi Level 2: 27.5% in 2009, 37.0% in 2010.]

What does being at TMMi Level 1 mean?

TMMi Level 1 reflects that there is little or no standardisation of process. The delivery of good testing and software quality management depends on people who know what they are doing (they may all be doing something different and be unable to understand each other's approach) and on the classic hero culture of the 24-hour test team who are indispensable. The problems arise when a subject matter expert's knowledge is not accessible by others, or the expert is no longer available. The impact on the business can be great, resulting not only in delays but in increased risks and costs, many examples of which have been well publicised.

What does TMMi Level 2 mean?

TMMi Level 2 reflects that, in our 2010 survey, 37% of respondents have some processes in place in all five of the Process Areas on which this level focuses. These are:

    1. Test Policy and Strategy

    2. Test Design and Execution

    3. Test Planning

    4. Test Environment

    5. Test Monitoring and Control

The small percentage of respondents at Level 2 is surprising, considering the number of industry sectors in the survey that run mission-critical applications.

Looking at the results across the 9 industry sectors who responded, we saw a swing from the 2009 survey's respondents towards more IT-support-driven respondents, which we believe is related to the growth in test outsourcing and the subsequent need for IT support services to ensure they are as mature as possible, to remain efficient and effective for their clients.

[Chart: respondents by industry sector, split by TMMi Level 1 and TMMi Level 2.]


For this survey the finance respondents, although at a lower volume than in the last survey, do seem to be showing growing maturity in their test process.

Test Policy and Strategy

This Process Area looks at how well established the organisational view of software testing is, specifically looking at the two key organisational test documents, the test policy and the test strategy, and at how they are deployed and understood.

A test policy has been established and agreed by the stakeholders and is aligned to business quality policies.

A test policy is the executive sign-off that confirms the goals and objectives of testing in an organisation and ensures that the objectives of the testing area are aligned with the needs of the business and are clearly understood and agreed by all. So if getting to market quickly is the business objective, then testing would be directed to enable speed to market. Matching test and business objectives makes sense in order to better serve the business.

Survey results explained: 2010 sees an increase in those working with an established test policy, from 25% in 2009 to 29%. However, 71% are working without any commitment from executive management that what they are doing is aligned to the needs of the business.

Experimentus view: It is key that executive sponsorship for testing is obtained if it is to be taken seriously, hence the need for a test policy. It is good news that more companies do generate test policies, but it is still very disappointing that 43% have a policy that has not been deployed, only 1% less than in 2009. Working under a clear policy to deliver better quality outcomes, combined with the resultant efficiency savings, is now seen to be gaining some momentum. Typical reasons for not deploying a test policy include: the value of having a test policy not being clearly understood by senior management; the test community not having clearly articulated the benefits to management; and policies being generated but falling into disuse because they are poorly written, understood and communicated. It seems pointless to put the effort into obtaining executive authorisation and then not use it in a more open way.

    Effective policies sponsored by the business typically deliver efficiency and productivity savings.

An organisation-wide or programme-wide test strategy has been established and deployed.

The role of a test strategy is to define the detail of how testing will be implemented, whether for an organisation, a programme of work or a project. At the organisational level it may be more of a framework of process and controls from which projects can select, dependent on risk and development approach.

[Chart: "A test policy has been established". 2009: no process exists 15%, embryonic stage 13%, exists but is not deployed successfully 44%, well established 25%. 2010: 12%, 14%, 43%, 29% respectively.]



At the programme and project level it is used to specify how testing will be carried out, thus ensuring that at all levels of the organisation a common, and more importantly a reusable, process is deployed. This ensures that each tester is able to work in a consistent, repeatable manner, with any process improvements made being reflected throughout the targeted area.

Survey results explained: Disappointingly, the results for fully deployed test strategies have dropped to 29% in this survey from 33% in 2009. In contrast, the number of organisations that have a test strategy but have yet to fully deploy it has gone up from 35% to 39%, perhaps indicating that some existing test strategies are falling into non-use or are in the process of being revised.

Experimentus view: With less than a third of respondents operating with an effective test strategy, as we highlighted in 2009, it raises the question: how effective is the testing, and what value is it adding to the organisation? These organisations are running tests, but are they focused on the objectives of the project, or merely on what the tester believes should be done? Working without a test strategy is like using a map in the dark; you might get to where you want, but you will probably get lost on the way, take a lot longer and spend a lot more time and money than you planned!

These results are surprising because most testers will say they work to a strategy, but this does not seem to be borne out by the results. For those who do, it is not clear where they get their focus if they do not have any test policy.

Test Design and Execution

This Process Area looks at the approach to test design, how well tests are executed against the software under test, and how well defects are managed and tracked to a satisfactory resolution.

    Test scripts are developed, documented and prioritised.

Test scripts describe the tests that will be run to prove that the software under test meets its quality targets. Prioritising and documenting these scripts ensures that the risks at release are lower, while the quality of the delivery is high.

Result: The very positive news is that the proportion of organisations that document and prioritise their test scripts has grown from 53% to 57%, with no one saying that they do no prioritisation or documentation at all.


[Chart: "An organisation/programme-wide test strategy has been established and deployed". 2009: no process exists 17%, embryonic stage 13%, exists but is not deployed successfully 35%, well established 33%. 2010: 16%, 14%, 39%, 29% respectively.]


Experimentus view: It is still very surprising that as an industry we see the production of test scripts as more important than having a test strategy or policy (so, no goals, just tests). It is not all negative: the results for this statement are better overall, and no one suggesting that they do not document or prioritise test scripts is a big step forward. There is no excuse not to document tests. All development methods expect documented test scripts, whether structured, iterative or agile (albeit some may be produced to record the test that took place rather than as the test to be executed; exploratory testing, for example).

    A Test Approach based on identified risks is established and agreed upon.

The test approach sits below the test strategy in the test documentation hierarchy (or can form part of the test strategy) and confirms the detail of how testing will actually be implemented for a project or programme, e.g. templates as well as process. This enables a tactical approach to dealing with risks.

Result: A small increase from 35% in 2009 to 37% of respondents actively using a risk process when defining their test approach. A more positive result is seen if we look at those in the "strongly disagree" category (who do not use risk at all), with a large decrease from 13% to 4%.

Experimentus view: With risk assessment forming an important cornerstone of testing, and with the number of conferences focussed on risk-based testing, it is encouraging that there is increased awareness and use of risk-based testing, but surprising that this statement did not get significantly better results. Our experience reflects these results, in that many organisations struggle to get a repeatable risk process established.

Incidents found during testing are reported using an incident classification scheme and managed using a documented procedure.

Incident classification relates to the process by which the priority and severity of defects are defined. The incident process governs how defects progress from identification through to resolution. This enables strategies to be developed to reduce the number of incidents, and thereby the cost and time of rework.
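As a minimal sketch of what such a scheme can look like, the Python fragment below pairs a severity scale with a priority scale and a simple status flow. The categories are illustrative assumptions, not taken from the survey or the TMMi model.

    from dataclasses import dataclass
    from enum import Enum

    class Severity(Enum):
        CRITICAL = 1  # system unusable, no workaround
        MAJOR = 2     # key function broken, workaround exists
        MINOR = 3     # cosmetic or low-impact defect

    class Priority(Enum):
        FIX_IMMEDIATELY = 1
        FIX_BEFORE_RELEASE = 2
        FIX_WHEN_POSSIBLE = 3

    @dataclass
    class Incident:
        identifier: str
        summary: str
        severity: Severity
        priority: Priority
        status: str = "identified"  # e.g. identified -> classified -> fixed -> retested -> closed

    incident = Incident("INC-042", "Login fails after password expiry",
                        Severity.MAJOR, Priority.FIX_BEFORE_RELEASE)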

Result: This result has remained constant between surveys, at 73%. However, those who now have a procedure but have not yet deployed it successfully increased significantly, from 10% to 24%.

[Chart: "A test approach based on identified risks is agreed". 2009: no process exists 13%, embryonic stage 13%, exists but is not deployed successfully 38%, well established 35%. 2010: 4%, 12%, 39%, 37% respectively.]

[Chart: "Incident classification scheme and process implemented". 2009: no process exists 8%, embryonic stage 6%, exists but is not deployed successfully 10%, well established 73%. 2010: 4%, 0%, 24%, 73% respectively.]



The Experimentus view: Incident management continues to show that it is the most intuitive process within the testing lifecycle. It is the one thing the test industry can be proud of doing well! This is not surprising, but we cannot say whether the incidents are consistently classified, meaningful and actionable.

Test Planning

This Process Area looks at the processes of test plan development, including estimation.

A test plan is established and maintained as the basis for managing testing and communication to stakeholders.

A test plan is the document that defines what the day-to-day activity on a testing project is. To be useful, it needs to be updated as the project/programme changes. It plays a large part in ensuring that testing starts and completes on time and that risks and any delays are identified early.

Result: Another maturity increase, from 54% to 57%, in those who are now documenting their plans and maintaining the plan once agreed.

The Experimentus view: Although the numbers have improved from 2009, there are still many test projects working without a plan, and this is not good. Not having a plan is like getting lost and trying to get home with a blank sheet of paper; you do not know where you are or when you will arrive at your destination!

An approach to test estimation based on a repeatable process and/or historical data is in place.

A mature organisation bases its estimation of the time, people and environments required to deliver the test project on a common, repeatable process. This ensures that as actual post-release data becomes known, the estimating data can be refined to provide more and more accurate estimates in the future.
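A minimal sketch of the idea, assuming a simple effort-per-test-case model calibrated from historical actuals (the function and figures are hypothetical):

    # Hypothetical sketch: refine a test-effort estimate from historical actuals.
    def estimate_effort(num_test_cases, history):
        # history holds (test_cases, actual_effort_days) pairs from past projects.
        # Average effort per test case observed across completed projects.
        days_per_case = sum(d for _, d in history) / sum(n for n, _ in history)
        return num_test_cases * days_per_case

    # As each project completes, its actuals are added, refining future estimates.
    history = [(120, 30.0), (200, 55.0), (80, 18.0)]
    print(round(estimate_effort(150, history), 1))  # 38.6 days for 150 test cases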

Result: A significant improvement on the last survey, with a massive increase from 17% in 2009 to 39% in 2010 in those who are now using a repeatable estimation process.

The Experimentus view: This is the best result of the 2010 survey and indicates a big step forward for software test planning, and a better understanding of the value of a repeatable process for estimating. A good, well-understood, repeatable estimation process ensures that software test planning will be better validated and have a significantly better chance of obtaining the budget it requires.

[Chart: "A test plan is established and maintained". 2009: no process exists 8%, embryonic stage 8%, exists but is not deployed successfully 27%, well established 54%. 2010: 0%, 8%, 35%, 57% respectively.]

[Chart: "Test estimation based on a repeatable process is used". 2009: no process exists 13%, embryonic stage 33%, exists but is not deployed successfully 33%, well established 17%. 2010: 10%, 22%, 27%, 39% respectively.]




Test Environment

This Process Area looks at how well planned and implemented the test environments are.

    Test environments are specified early and their availability is ensured on time in projects.

Understanding the test environment requirements early enough, and in sufficient detail, enables costs to be built into the project plan early, thus providing early visibility to the business. The details of any new environments should be provided by the technical architects, with the test team deciding when they are needed and how they are to be delivered (phased or all together). Over time, this will ensure efficient and accurate test environments whose associated costs have been planned for.

Result: The number of respondents who specify their test environments early has dropped from 40% in 2009 to 37% in 2010, but the number with a process in place that is not yet fully deployed has gone up by nearly 10%, which is a positive sign.

The Experimentus view: Test environments are an area often blamed for delays in testing, due to poor construction or generally not getting what was requested. The drop in the number planned early, available and working on time is perhaps a signal that, as testers, we need to recognise that sometimes we are not as effective as we should be at defining the environments we require and getting them booked and implemented in time!

For later test stages (e.g. system or acceptance testing), the test environment is as real life as possible.

For the later stages in testing, to ensure that the software will work in production, it is crucial that the test environment replicates the production environment as far as software/hardware and configuration are concerned. It may be smaller than the production environment in terms of data, but not so small that it will not react in the same way. Increased risks in deployment are mitigated by understanding the relationship between this and the live environment.

[Chart: "Test environments are specified early and their availability is ensured". 2009: no process exists 8%, embryonic stage 17%, exists but is not deployed successfully 33%, well established 40%. 2010: 6%, 16%, 41%, 37% respectively.]

[Chart: "For later test stages the test environment is as real life as possible". 2009: no process exists 4%, embryonic stage 13%, exists but is not deployed successfully 25%, well established 56%. 2010: 4%, 10%, 29%, 55% respectively.]



Result: Very little change from the 2009 results, with 55% of test environments reflecting real life.

The Experimentus view: It continues to be a good sign that the system and user acceptance test phases do use production-like environments. Good environments and data ensure that the outcome of test execution, positive or negative, can be believed. This will continue to be a challenge for organisations with complex systems, and is perhaps reflected in the increase seen in test approaches based on risk.

Test Monitoring and Control

This Process Area is focussed on the collection and use of metrics to report progress and identify possible improvements to the process.

A set of goal-oriented test process performance indicators has been established and deployed.

For testing to measure whether or not it is effective, it needs to set itself some performance indicators, such as achieving a certain level of Defect Detection Percentage (DDP). You cannot manage what you do not measure.
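As a worked illustration, using the commonly quoted definition of Defect Detection Percentage (the survey itself does not define the measure):

    DDP = defects found by a test stage / (defects found by that stage + defects found afterwards) x 100

So if, for example, system testing finds 90 defects and a further 10 escape into production, DDP = 90 / (90 + 10) x 100 = 90%.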

Result: A significant drop from the 2009 result, from 30% to 14%, with a significant increase in those that have a set of performance indicators but have not yet implemented them fully.

The Experimentus view: In our opinion this is not only the worst result of the entire survey; the industry seems to have moved backwards in its use of metrics. The increase in those organisations that have indicators but have not implemented them fully may be a reflection of the goals originally set not providing the right level of meaningful and actionable measurement. Organisations may therefore be revisiting their performance indicators, which is seen in the increase from 25% to 49%. Interestingly, there is a decrease of around 50% in organisations that previously did not have test goals, reflecting an increased take-up of performance indicators by the more immature organisations.

    Progress of testing is monitored against the Test Plan.

We saw in Test Planning that it is important to keep the plan up to date. It is equally important to measure actual activity against planned activity. Any slippage can then be identified and managed before it creates any real issues.
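A minimal sketch of such a comparison, assuming the plan simply records how many test cases should have been executed by each day (names and figures are hypothetical):

    # Hypothetical sketch: flag slippage of actual test execution against the plan.
    def slippage(planned_by_day, executed_by_day):
        # Positive values mean execution is behind plan on that day.
        return {day: planned_by_day[day] - executed_by_day.get(day, 0)
                for day in planned_by_day}

    planned = {"Mon": 20, "Tue": 40, "Wed": 60}   # cumulative test cases planned
    executed = {"Mon": 18, "Tue": 33, "Wed": 52}  # cumulative test cases executed
    print(slippage(planned, executed))  # {'Mon': 2, 'Tue': 7, 'Wed': 8}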

Result: A significant increase, from 42% in 2009 to 59% in 2010, in those who track progress against the agreed plan.

[Chart: "A set of goal-oriented test process performance indicators has been established and deployed". 2009: don't have test goals 25%, don't think we have test goals 17%, might have test goals 25%, have test goals 29%. 2010: 14%, 22%, 49%, 14% respectively.]

[Chart: "Progress of testing is monitored against the Test Plan". 2009: no process exists 8%, embryonic stage 13%, exists but is not deployed successfully 35%, well established 42%. 2010: 4%, 8%, 29%, 59% respectively.]




The Experimentus view: It is very encouraging to see test organisations getting better at progress monitoring. It is still a long way from good, but it is improving. A project that does not monitor its progress has no clue where it is or when it will finish!

Conclusion

The survey results show that we are improving as an industry, with the overall maturity of testing increasing over the two-year period since the last survey. The signs are that this will continue as more and more companies realise the benefits that improvements to software quality management and test processes can bring.

The signs from the survey are encouraging, with those organisations that were previously the least mature making positive improvements to the way they work, and more mature organisations looking to and using testing efficiency as a way of meeting current budget constraints. There seems to be a more focussed, professional attitude, which needs data to substantiate current activities and to measure the improvements made.

In the last two years a lot of economic and political change has taken place that is driving down the budget available for IT. This has focused, and will continue to focus, everyone on being more efficient and effective at what they do. There is of course the alternative, where process and methods just get ignored in the haste to save money. As will be obvious to anyone reading this paper, that leads to inflated costs, lower quality than expected and often late delivery: all the things which the business regularly brings up as areas for improvement.

The improvement in maturity overall is heading in the right direction, and it is expected that, as software quality and testing come under increasing pressure in the coming year, the rate of increase will be greater year on year, reflecting both economic pressure and competitive advantage.


Industry Overview

The challenge we see for the industry is to increase the rate at which software quality management and test processes mature, to enable testing to be clearly labelled as a profession within the application life cycle. The ambition of testing to be recognised as a profession will never be realised unless there is measurement in place to manage and report on activities which assist in creating roadmaps for self-improvement and the delivery of better quality software on time and in budget.

Management also have a considerable amount of hidden costs in the development lifecycle which, without metrics, they cannot hope to understand or control. It does raise the question as to why any organisation would want to run without consistent, meaningful and actionable metrics.

Particularly now that the TMMi Foundation has completed the development of the full TMMi model by adding Maturity Levels 4 and 5, covering Management and Measurement and Optimisation, there is no reason why the rate of improvement of maturity cannot be increased further. The spread of appropriate good practices into software quality management and test processes can only be a good thing, raising the professionalism of testing and management's capability to manage.

We certainly expect the growth in the maturity of testing to continue. We propose that, in order to maximise the benefits of improvements, the industry must focus on the areas of goal setting and metrics/reporting as established practice, to demonstrate to the business the value provided and the progress made, as well as delivering input into the roadmap for further improvements to meet the challenges of the coming years. The short-term implication of this will be to bring into sharp focus the wasted cost and effort which lie hidden in the development lifecycle.

Experimentus has seen a number of trends in the industry which reflect an increased desire to improve the way we manage software quality and testing:

System integrators and specialist test companies are under increasing pressure from their clients and prospects to demonstrate a real commitment to delivering and improving quality. We are now increasingly being asked to certify their processes against appropriate good practices, so that they can offer competitive and demonstrable differentiation.

Organisations which are still suffering from the same problems they have had for some years are now under increasing pressure from the business to solve those problems so that they can better compete.

There is a move from "would like to" to "must" regarding increased efficiency and savings while maintaining or enhancing the quality of deliverables.

For international organisations, test and quality management capability assessments are on the increase, providing frameworks for the better management and delivery of quality and identifying hidden costs.

Now that the full TMMi model has been published, and with the anticipated increase in the number of accredited assessors, there is every reason and opportunity for organisations to increase their rate of maturity, improve quality and eliminate those hidden costs.

It is also interesting to note that the number of people joining the TMMi Foundation has increased dramatically, with the model gradually being adopted by many organisations.


    Appendix 1

    Background to the TMMi Model and the TMMi Foundation

The Test Maturity Model integrated (TMMi) has been developed to complement the existing CMMI (Capability Maturity Model Integration) framework and to deal specifically and in detail with software quality.

It provides a structured presentation of maturity levels, allowing for standard TMMi assessments and certification and enabling a consistent deployment of the standard and the collection of industry metrics.

TMMi has a rapidly growing uptake across Europe, Asia and the USA, and owes its popularity to being the only independent test process measurement method.

The independent TMMi Foundation (www.tmmifoundation.org) has been established with the sole intent of developing the TMMi standard.

The TMMi Foundation provides:

A standard, staged TMMi model that can be used in isolation or in support of other process improvement models.

TMMi Assessment Method Application Requirements (TAMAR), in accordance with ISO 15504, and the process to certify commercial assessment methods against the standard model.

Certification and training/examination processes, procedures and standards for the formal, public accreditation of Assessors and Lead Assessors, and their ongoing management.

An independently managed data repository to support TMMi assessment method accreditation, assessor and assessment certification/validation, and validated assessment data and certificates.


LISTEN

We actively listen to our clients' enterprise computing issues and challenges, giving careful consideration to their individual requirements.

CHALLENGE

Focussed on consistently improving operational efficiencies and providing confidence to make risk-management decisions, we pioneer new ways to challenge our clients' thinking.

UNDERSTAND

Our proven expertise and broad understanding are based on over 25 years of consultancy, analysis and testing of large enterprise applications.

INTERPRET

Our unique analytical approach and commitment ensure that our clients' requirements are translated into first-class deliverables with measurable results and productivity gains.

CREATE

We create superior quality solutions that deliver not only innovation, but help our clients to reduce cost, mitigate risk and enhance revenue.

Experimentus Ltd
17a Dorset Square
London
NW1 6QB
Tel: +44 (0)207 871 2300
Email: [email protected]
Web: www.experimentus.com
