Benchmark METRICS THAT MATTER October 4 2012

Description

Betty Schaar and Jeff Roth presented this at BenchmarkQA's fall 2012 Software Quality Forum, challenging attendees to rethink the metrics they're generating. Metrics without the context of the project mean nothing.

Transcript of Benchmark METRICS THAT MATTER October 4 2012

METRICS THAT MATTER Challenge Your Current Thinking!

(a.k.a. Selecting Valuable Metrics Instead of Vanity Metrics)

10/5/2012 ©2012 BenchmarkQA, Inc. All rights reserved.

Presented by: Betty Schaar & Jeff Roth


METRICS THAT MATTER


“The only metrics that entrepreneurs should invest energy in collecting are those that help them make decisions. Unfortunately, the majority of data available in off-the-shelf analytics packages are what I call ‘Vanity Metrics.’ They might make you feel good, but they don’t offer clear guidance for what to do.”

– Eric Ries


MEASUREMENT VS. METRIC

Measurement - The value of a dimension, quantity, or capacity obtained by collecting project data.


Metric - A comparison, ratio, or plot of a series of measurements, or an algorithm combining two or more measurements.

A single measurement does not provide support for decision-making; a metric or group of metrics can be used to make decisions and manage a project.
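To make the distinction concrete, here is a minimal illustrative sketch (not from the slides; the numbers are made up): raw defect counts are measurements, and comparing them as a close rate turns them into a metric that can actually drive a decision.

```python
# Illustrative sketch (not from the slides): a measurement vs. a metric.
# Measurements: raw values collected from the project.
defects_opened_this_week = 42    # a single measurement
defects_closed_this_week = 28    # another single measurement

# Metric: a ratio of two measurements that can support a decision,
# e.g. "is the team keeping up with incoming defects?"
defect_close_rate = defects_closed_this_week / defects_opened_this_week
print(f"Defect close rate: {defect_close_rate:.0%}")   # ~67%: the backlog is growing
```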


ISSUES WITH METRICS

> Vanity vs. Valuable

> Relative vs. Absolute

> Not enough context

> Systems don’t support data collection

> Not enough historical data to be valuable

> Fear causes “skewage” of data

> Outliers cause “skewage” of data

> The numbers can lie! (or can at least be manipulated)


DETERMINING METRICS CONTEXT



CONTEXT TYPES


SDLC/Methodology

Environment/Technologies

Team/Organization

In Relation to Other Information

Type of Project


CONTEXT – SDLC/METHODOLOGY


> Waterfall

> Agile

> Multiple phases/releases

> Pilot or prototype initially

> External constraints/factors impacting delivery


AGILE SPECTRUM


[Graphic: the agile spectrum. © 2012 Impressum]


CONTEXT - TYPE OF PROJECT

> In-house

New development

Maintenance

> COTS

> Outsourced

> SaaS

> New delivery platform, e.g. mobile



CONTEXT - TEAM/ORGANIZATION

> Co-located team vs. distributed team

> Internal vs. external application users



CONTEXT – ENVIRONMENT/TECHNOLOGIES

> Regulated vs. non-regulated

> Mature vs. leading-edge technologies



CONTEXT – IN RELATION TO OTHER INFO

> We have no open Severity 1 defects. Can we release?
∙ What if I told you we also have 250 Severity 2 defects?

> We ran 2,000 test cases. Can we release?
∙ What if I told you we ran out of time to test the most recently added feature?

> We covered all of the critical requirements. Can we release?
∙ What if I told you we only had time to run positive tests?
∙ What if I told you we still have 2 open Critical Severity defects?



CONTEXT – DON’T OVERTHINK IT


> SDLC/methodology

> Project type

> Team/organization

> Environment/technologies

> Metrics in relation to other information

UNDERSTANDING YOUR AUDIENCE



ME

> QA Lead
Test case building progress by tester
Test execution progress by tester
Defect close rate
Test coverage

> Test Analysts
Tests to build
Feature coverage
Tests to execute
Defects written by me
Defects assigned to me



YOU


[Diagram: ME at the center, surrounded by YOU in many roles: Project Manager, Development Lead, Project Sponsor, QA Manager, Business Analyst, Developer, Product Owner, and Customer]


US

> Project team

> Scrum team

> QA team




THEM

> Enterprise

> Executive Suite / C-level

> Divisional Managers

> Customers



USE METRICS TO ANSWER A QUESTION



ARE WE DONE YET?

> Building Tests

# of Total Tests vs. # Written/Approved

Test Coverage Completeness

> Executing Tests

# of Tests Passed/Failed/Blocked/Not Run

Automation vs. Manual

Velocity/Rate of Execution
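The velocity bullet above lends itself to a simple calculation; here is a hypothetical sketch (the counts and field names are made up, not from the deck) that projects a finish date from the tests-per-day rate.

```python
# Hypothetical sketch of test execution velocity (numbers are made up).
from datetime import date, timedelta

total_tests = 2000
executed_per_day = [120, 135, 110, 150, 140]    # tests executed on each day so far

velocity = sum(executed_per_day) / len(executed_per_day)   # tests per day
remaining = total_tests - sum(executed_per_day)
days_left = remaining / velocity

print(f"Velocity: {velocity:.0f} tests/day, {remaining} tests remaining")
print(f"Projected finish: {date.today() + timedelta(days=round(days_left))}")
```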





UAT PROCESS EXECUTION DASHBOARD



QUALITY CENTER TIP – VELOCITY OF TEST


[Screenshots: Setup and Result]


ARE WE DONE YET?

> Sufficient Quality
# of Active Defects by Severity
Quality Bar


Feature | Critical | High | Medium | Low | Total | Ready for Retest
General | 0 | 0 | 0 | 0 | 0 | 0
Feature 1 | 0 | 0 | 0 | 0 | 0 | 0
Feature 2 | 0 | 1 | 1 | 0 | 2 | 2
Feature 3 | 0 | 0 | 0 | 0 | 0 | 1
Feature 4 | 0 | 0 | 3 | 1 | 4 | 3
Feature 5 | 0 | 1 | 2 | 0 | 3 | 5
Feature 6 | 0 | 0 | 0 | 0 | 0 | 0
Feature 7 | 3 | 1 | 1 | 0 | 5 | 0
Feature 8 | 0 | 0 | 0 | 0 | 0 | 0
Feature 9 | 2 | 3 | 5 | 0 | 10 | 0
Feature 10 | 0 | 1 | 0 | 0 | 1 | 0
Feature 11 | 2 | 3 | 5 | 2 | 12 | 1
Feature 12 - General | 0 | 1 | 0 | 0 | 1 | 0
Feature 12: Sub 1 | 0 | 1 | 0 | 0 | 1 | 0
Feature 12: Sub 2 | 0 | 0 | 0 | 0 | 0 | 0
Feature 12: Sub 3 | 0 | 0 | 1 | 0 | 1 | 0
Feature 12: Sub 4 | 0 | 0 | 0 | 0 | 0 | 0
Feature 12: Sub 5 | 0 | 0 | 0 | 0 | 0 | 0
Feature 12: Sub 6 | 0 | 0 | 0 | 0 | 0 | 0
Feature 13 | 0 | 5 | 0 | 0 | 5 | 1
Totals | 7 | 17 | 18 | 3 | 45 | 13

Closed Since 8/18/2011 | Closed | Deferred | Duplicate | Ready for Retest | Reported Since 8/18/2011
18 | 245 | 9 | 0 | 13 | 5

QUALITY BAR = # Tests Passed / Total # Tests Executed = 65%
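A minimal sketch of the quality bar check above; reading the 65% as the target bar is an assumption, and the counts are example numbers.

```python
# Sketch: quality bar = tests passed / total tests executed,
# checked against an assumed 65% bar (per the slide).
def quality_bar(tests_passed: int, tests_executed: int) -> float:
    return tests_passed / tests_executed

QUALITY_BAR_TARGET = 0.65

current = quality_bar(tests_passed=1300, tests_executed=2000)   # example numbers
print(f"Quality bar: {current:.0%} (target {QUALITY_BAR_TARGET:.0%})")
print("Meets the bar" if current >= QUALITY_BAR_TARGET else "Below the bar")
```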


WAS OUR APP/RELEASE GOOD ENOUGH?

> Warranty Period

Quantity & Severity of Post Release Defects

Time for Post Production Fixes

> Defect Removal Rate

> Cost of Production Defects


Defect Removal Rate = # of pre-release defects / (# of warranty period defects + # of pre-release defects)

Cost of Production Defects = # hours for production defect fixes × average burden rate ($)
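The two formulas above as a small sketch; the sample numbers are made up.

```python
# Sketch of the two release-quality formulas on this slide (sample numbers are made up).
def defect_removal_rate(pre_release_defects: int, warranty_defects: int) -> float:
    # pre-release defects / (warranty-period defects + pre-release defects)
    return pre_release_defects / (warranty_defects + pre_release_defects)

def cost_of_production_defects(fix_hours: float, burden_rate: float) -> float:
    # hours spent fixing production defects x average burden rate ($/hour)
    return fix_hours * burden_rate

print(f"Defect removal rate: {defect_removal_rate(450, 50):.0%}")                    # 90%
print(f"Cost of production defects: ${cost_of_production_defects(120, 85):,.0f}")    # $10,200
```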


IS QUALITY IMPROVING?

> Trending Analysis

# of pre- and post-release defects found and fixed per release (root cause distribution analysis by release)

> Retrospectives

Burnup/Burndown over multiple sprints

User Stories/Tasks



HOW EFFECTIVE ARE WE?

> Sprint delivery effectiveness

Burn down/burn up
Velocity by Sprint

Retrospective

> Test coverage

> # Tests per resource per hour



CAN I QUANTIFY THE IMPACT OF AN ISSUE?

> Frequency of Issue
Quantity of occurrences
Timeframe of each occurrence

> Severity of Issue
Empirical scales (show stopper, critical, high, medium, low)
Subjective scales

> Impact of Issue
Importance to business (priority/severity)
Cost of not fixing



AUTOMATION ROI

> Simple ROI: monetary savings due to test automation
PROs: Good overview for management
CONs: Oversimplified & needs resource costs

> Efficiency ROI: time savings resulting from test automation
PROs: Easy to gather data, shows team impacts
CONs: Oversimplified, assumes 100% test execution each cycle


ROI = (Gains – Investment Costs) / Investment Costs
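The ROI formula above as code; this is a sketch with hypothetical figures, not numbers from the deck.

```python
# Sketch of the ROI formula above; the sample figures are hypothetical.
def roi(gains: float, investment_costs: float) -> float:
    return (gains - investment_costs) / investment_costs

print(f"ROI: {roi(gains=50_000, investment_costs=20_000):.0%}")   # 150%
```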


SIMPLE ROI


[Chart: Simple ROI (% ROI, 0% to 300%) over Years 1-4, plotted for 18, 24, and 30 test cycles per year]

Task | Manual Qty | Rate | Factor | Total $$ | Auto Qty | Rate | Factor | Total
Hardware | 1 | $1,000 | - | $1,000 | 2 | $1,000 | - | $2,000
Software (Initial Costs) | 0 | $0 | - | $- | 2 | $4,000 | - | $8,000
Software (Maintenance Costs) | N/A | $0 | - | $- | 1 | $8,000 | 0.2 | $1,600
Build 500 Test Scripts | N/A | $0 | - | $- | 500 | $85 | 1 | $42,500
Execute & Analyze 500 Test Scripts | 500 | $60 | 0.17 | $5,100 | 1 | $85 | 4 | $340
Maintain 500 Test Scripts | N/A | $0 | - | $- | 1 | $85 | 8 | $680
Execute Manual Test Suite (less Automated) | 1000 | $0 | 0.17 | | Time to Execute Automation: 0.03 | | |

Initial Cost: Manual $1,000 | Automation $52,500
Total Cost of 24 Cycles of Manual Testing: $123,400
Total Cost of 24 Cycles of Test Automation: $78,580
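A sketch that reproduces the 24-cycle totals above from the per-task quantities, rates, and factors; the cost model (cost = qty × rate × factor) is inferred from the table.

```python
# Sketch: reproduce the Simple ROI totals above (cost = qty x rate x factor).
CYCLES = 24

# Manual testing: hardware once, then execute & analyze 500 scripts each cycle.
manual_initial = 1 * 1_000                            # hardware
manual_per_cycle = 500 * 60 * 0.17                    # execute & analyze = $5,100
manual_total = manual_initial + CYCLES * manual_per_cycle

# Automation: hardware, licenses, and building 500 scripts up front, plus tool
# maintenance, then execution analysis and script upkeep each cycle.
auto_initial = 2 * 1_000 + 2 * 4_000 + 500 * 85 * 1   # $52,500
auto_tool_maintenance = 1 * 8_000 * 0.2               # $1,600
auto_per_cycle = 1 * 85 * 4 + 1 * 85 * 8              # $340 analysis + $680 upkeep
auto_total = auto_initial + auto_tool_maintenance + CYCLES * auto_per_cycle

print(f"Manual, 24 cycles:     ${manual_total:,.0f}")   # $123,400
print(f"Automation, 24 cycles: ${auto_total:,.0f}")     # $78,580
```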


EFFICIENCY ROI


[Chart: Efficiency ROI (% ROI, 0% to 35%) over Years 1-4, plotted for 18, 24, and 30 test cycles per year]

Task | Manual Qty | Rate | Factor | Total Days | Auto Qty | Rate | Factor | Total Days
Hardware | 1 | $1,000 | - | | 2 | $1,000 | - |
Software (Initial Costs) | 0 | $0 | - | | 2 | $4,000 | - |
Software (Maintenance Costs) | N/A | $0 | - | | 1 | $8,000 | 0.2 |
Build 500 Test Scripts | N/A | $0 | - | | 500 | $85 | 1 | 62.5
Execute & Analyze 500 Test Scripts | 500 | $60 | 0.17 | 10.6 | 1 | $85 | 4 | 0.5
Maintain 500 Test Scripts | N/A | $0 | - | | 1 | $85 | 8 | 1.0
Execute Manual Test Suite (less Automated) | 1000 | $0 | 0.17 | 21.3 | Time to Execute Automation: 0.03 | | | 0.8

Hours per Day: Manual 8 | Automation 18
Total Time in Days of 24 Cycles of Manual Testing: 765.0
Total Time in Days of 24 Cycles of Test Automation: 628.5
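A matching sketch for the effort view, assuming (as the table implies) 8 attended hours per day for people and 18 unattended hours per day for automation runs.

```python
# Sketch: reproduce the Efficiency ROI day totals (days = hours / hours per day).
CYCLES = 24
MANUAL_HOURS_PER_DAY, AUTO_HOURS_PER_DAY = 8, 18

# Manual: 1,500 tests per cycle at 0.17 hours each, all run by people.
manual_days = CYCLES * (1500 * 0.17) / MANUAL_HOURS_PER_DAY                 # 765.0

# Automation: build 500 scripts once (1 hour each), then per cycle:
# unattended runs, attended analysis and upkeep, plus the 1,000 tests kept manual.
build_days = (500 * 1) / MANUAL_HOURS_PER_DAY                               # 62.5
per_cycle_days = ((500 * 0.03) / AUTO_HOURS_PER_DAY       # 0.8: unattended run time
                  + (4 + 8) / MANUAL_HOURS_PER_DAY        # 1.5: analyze + maintain
                  + (1000 * 0.17) / MANUAL_HOURS_PER_DAY) # 21.25: remaining manual tests
auto_days = build_days + CYCLES * per_cycle_days                            # ~628.5

print(f"Manual, 24 cycles:     {manual_days:.1f} days")
print(f"Automation, 24 cycles: {auto_days:.1f} days")
```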


ACTUAL VS. PROJECTED ROI


[Chart: Actual vs. Projected ROI (% ROI, -150% to 150%) by month over two years, with Projected ROI and Actual ROI lines]

• Visibility for ongoing value of automation
• Better understand/plan ROI for other automation efforts
• Combat long-term automation malaise

THANK YOU FOR ATTENDING! For more information about BenchmarkQA and the services we offer, please contact:

Molly Decklever

952.392.2384

molly.decklever@benchmarkqa.com
