The Economics of Software Reliability


Transcript of The Economics of Software Reliability

Page 1: The Economics of Software Reliability


Barry Boehm, USC

ISSRE 2003 Keynote Address

November 19, 2003([email protected])

(http://sunset.usc.edu)

The Economics of Software Reliability


Outline

• The business case for software reliability
  – Example: net value of test aids
  – Value depends on stakeholder value propositions
• What are stakeholders really relying on?
  – Safety, accuracy, response time, …
  – Attributes often conflict
• Software dependability in a competitive world
• Conclusions

Page 2: The Economics of Software Reliability


Software Reliability Business Case

• Software reliability investments compete for resources
  – With investments in functionality, response time, adaptability, speed of development, …
• Each investment option generates curves of net value and ROI
  – Net Value NV = PV(benefit flows − cost flows)
  – Present Value PV = flows at future times, discounted
  – Return on Investment ROI = NV / PV(cost flows)
• Value of benefits varies by stakeholder and role
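A minimal sketch of these definitions, with hypothetical yearly flows and discount rate (none of the numbers come from the talk):

```python
# Illustrative sketch: net value and ROI of a reliability investment,
# using the slide's definitions. Flows and discount rate are hypothetical.

def present_value(flows, rate=0.10):
    """Discount a list of yearly flows back to time zero."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

def net_value(benefit_flows, cost_flows, rate=0.10):
    """NV = PV(benefit flows - cost flows)."""
    deltas = [b - c for b, c in zip(benefit_flows, cost_flows)]
    return present_value(deltas, rate)

def roi(benefit_flows, cost_flows, rate=0.10):
    """ROI = NV / PV(cost flows)."""
    return net_value(benefit_flows, cost_flows, rate) / present_value(cost_flows, rate)

# Hypothetical example: invest 100 now, reap 60/year of avoided losses.
print(roi([0, 60, 60, 60], [100, 10, 10, 10]))
```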


Software Testing Business Case

• Vendor proposition
  – Our test data generator will cut your test costs in half
  – We’ll provide it to you for 30% of your test costs
  – After you run all your tests for 50% of your original costs, you’re 20% ahead

• Any concerns with the vendor proposition?

Page 3: The Economics of Software Reliability


Software Testing Business Case

• Vendor proposition
  – Our test data generator will cut your test costs in half
  – We’ll provide it to you for 30% of your test costs
  – After you run all your tests for 50% of your original costs, you’re 20% ahead

• Any concerns with the vendor proposition?
  – The test data generator is value-neutral*
  – Every test case and defect is treated as equally important
  – Usually, 20% of test cases cover 80% of business value

* As are most current software engineering techniques


20% of Features Provide 80% of Value: Focus Testing on These (Bullock, 2000)

[Figure: % of value for correct customer billing (0–100) vs. customer type (1–15). A steep Pareto curve shows a small fraction of customer types providing most of the value, while an automated test generation tool treats all tests as having equal value.]

Page 4: The Economics of Software Reliability


Value-Based Testing Provides More Net Value

[Figure: net value NV (−40 to 60) vs. percent of tests run (0–100). The Test Data Generator curve rises from −30 at 0% of tests to (100, 20); the Value-Based Testing curve peaks at (30, 58) and returns to 0 at 100%.]

% Tests   Test Data Generator          Value-Based Testing
          Cost   Value   NV            Cost   Value   NV
0         30     0       −30           0      0       0
10        35     10      −25           10     50      40
20        40     20      −20           20     75      55
30        45     30      −15           30     88      58
40        50     40      −10           40     94      54
…         …      …       …             …      …       …
100       80     100     +20           100    100     0
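The table’s curves can be approximated in a short sketch. The cost model follows the vendor proposition (tool bought for 30, then half-price test runs; value-based tests at full price); the value curve for value-based testing is an exponential-saturation stand-in for the Pareto value distribution, fitted by eye to the table. All modeling choices here are assumptions, not from the talk:

```python
# Illustrative sketch: approximating the slide's net-value curves.
# Assumptions (mine): generator bought for 30 with half-price test runs;
# value-based testing runs highest-value tests first, modeled by an
# exponential saturation curve fitted by eye to the table above.
import math

def nv_test_data_generator(pct):
    cost = 30 + 0.5 * pct   # tool purchase + half-price test runs
    value = pct             # value-neutral: every test worth the same
    return value - cost

def nv_value_based(pct):
    cost = pct                                # full-price test runs
    value = 100 * (1 - math.exp(-pct / 14))   # ~88% of value at 30% of tests
    return value - cost

for pct in (0, 10, 20, 30, 40, 100):
    print(pct, round(nv_test_data_generator(pct)), round(nv_value_based(pct)))
```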


There is No Universal Dependability-Value Metric

• Different stakeholders rely on different value attributes
  – Protection: safety, security, privacy
  – Robustness: reliability, availability, survivability
  – Quality of Service: performance, accuracy, ease of use
  – Adaptability: evolvability, interoperability
  – Affordability: cost, schedule, reusability
• Value attributes continue to tier down
  – Performance: response time, resource consumption (CPU, memory, comm.)
• Value attributes are scenario-dependent
  – 5 seconds normal response time; 2 seconds in crisis
• Value attributes often conflict
  – Most often with performance and affordability

Page 5: The Economics of Software Reliability


Major Information System Dependability Stakeholders

[Diagram: stakeholder classes arrayed around an Information System]
• Information Suppliers – citizens, companies
• Information Brokers – financial services, news media
• Information Consumers – decisions, education, entertainment
• Mission Controllers – pilots, distribution controllers
• Dependents – passengers, patients
• Developers, Acquirers, Administrators


Overview of Stakeholder/Value Dependencies

• Strength of direct dependency on value attribute
  – ** – Critical; * – Significant; blank – insignificant or indirect

[Table: stakeholder classes (Developers and Acquirers; Mission Controllers and Administrators; Info. Consumers; Info. Brokers; Info. Suppliers and Dependents) rated against the attribute classes Protection, Robustness, Quality of Service, Adaptability, and Affordability. The cell-level ratings are elaborated on the next slide.]

Page 6: The Economics of Software Reliability


Elaboration of Stakeholder/Value Dependencies

[Table: critical (**) and significant (*) dependency ratings for each attribute – Protection (safety, security, privacy), Robustness (reliability, availability, survivability), Quality of Service (performance; accuracy and consistency; accessibility, ease of use, and difficulty of misuse), Adaptability (evolvability, interoperability), and Affordability (cost, schedule, reusability) – against each stakeholder class: information suppliers, dependents, information brokers, information consumers, mission controllers, developers and maintainers, administrators, and acquirers, with some ratings distinguishing mission-critical from mission-uncritical cases.]


Implications for Dependability Engineering

• There is no universal dependability metric to optimize
• Need to identify the system’s success-critical stakeholders
  – And their dependability priorities
• Need to balance satisfaction of stakeholder dependencies
  – Stakeholder win-win negotiation
  – Dependability attribute tradeoff analysis
• Need value-of-dependability models, methods, and tools

Page 7: The Economics of Software Reliability


Outline

• The business case for software reliability
• What are stakeholders really relying on?
• Software dependability in a competitive world
  – Is market success a monotone function of dependability?
  – Is quality really free?
  – Is “faster, better, cheaper” really achievable?
  – Value-based vs. value-neutral methods
• Conclusions


Competing on Cost and Quality
- adapted from Michael Porter, Harvard

[Figure: ROI vs. price or cost point. Along the price axis: too cheap (low Q), cost leader (acceptable Q), stuck in the middle, quality leader (acceptable C), over-priced. ROI peaks at the cost-leader and quality-leader positions and dips in between.]

Page 8: The Economics of Software Reliability


Competing on Schedule and Quality
- A risk analysis approach

• Risk Exposure RE = Prob(Loss) * Size(Loss)
  – “Loss” – financial; reputation; future prospects, …
• For multiple sources of loss:
  RE = Σ over all loss sources of [Prob(Loss) * Size(Loss)]
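A minimal sketch of the summation, with hypothetical loss sources and numbers:

```python
# Illustrative sketch: total risk exposure summed over loss sources,
# per the slide's formula. The sources and numbers are hypothetical.

def risk_exposure(sources):
    """RE = sum over sources of Prob(Loss) * Size(Loss)."""
    return sum(p_loss * size_loss for p_loss, size_loss in sources)

# (probability of loss, size of loss in $K) for three hypothetical sources
sources = [
    (0.10, 500),   # financial loss from a critical defect
    (0.30, 100),   # reputation damage from minor defects
    (0.05, 2000),  # lost future prospects from a late, buggy release
]
print(risk_exposure(sources))  # 180.0
```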


Example RE Profile: Time to Ship
- Loss due to unacceptable dependability

[Figure: RE = P(L) * S(L) vs. time to ship (amount of testing). The curve falls with added testing: many defects mean high P(L) and critical defects high S(L) early; few defects mean low P(L) and minor defects low S(L) later.]

Page 9: The Economics of Software Reliability


Example RE Profile: Time to Ship
- Loss due to unacceptable dependability
- Loss due to market share erosion

[Figure: adds a market-share-erosion curve rising with time to ship – few, weak rivals mean low P(L) and S(L) early; many, strong rivals mean high P(L) and S(L) later – alongside the falling dependability-loss curve.]


Example RE Profile: Time to Ship
- Sum of Risk Exposures

[Figure: summing the falling dependability-loss curve and the rising market-share-erosion curve yields a U-shaped total RE curve whose minimum marks the testing “sweet spot.”]
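The sweet spot can be located numerically once the two curves are modeled. The curve shapes and constants below are illustrative assumptions, not data from the talk:

```python
# Illustrative sketch: finding the testing "sweet spot" by minimizing the
# sum of the two risk-exposure curves sketched on the slides.
import math

def re_dependability(t):
    # Falls with testing time t: fewer and more minor defects remain
    return 100 * math.exp(-0.05 * t)

def re_market_erosion(t):
    # Rises with testing time t: more and stronger rivals ship first
    return 5 * math.exp(0.03 * t)

def total_re(t):
    return re_dependability(t) + re_market_erosion(t)

# Coarse scan over 0..120 units of testing time
sweet_spot = min(range(121), key=total_re)
print(sweet_spot, round(total_re(sweet_spot), 1))  # ~44 units of testing
```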

Page 10: The Economics of Software Reliability


Comparative RE Profile: Safety-Critical System

[Figure: for a safety-critical system, the higher S(L) of defects raises the dependability-loss curve, moving the optimum from the mainstream sweet spot to a high-Q sweet spot with more testing before shipping.]


Comparative RE Profile: Internet Startup

[Figure: for an Internet startup, the higher S(L) of delays raises the market-share-erosion curve, moving the optimum from the mainstream sweet spot to a low-TTM (time to market) sweet spot with less testing before shipping.]

Page 11: The Economics of Software Reliability


Interim Conclusions

• Unwise to try to compete on both cost/schedule and quality
  – Some exceptions: major technology or marketplace edge
• There are no one-size-fits-all cost/schedule/quality strategies
• Risk analysis helps determine how much testing (prototyping, formal verification, etc.) is enough
  – Buying information to reduce risk
• Often difficult to determine parameter values
  – Some COCOMO II values discussed next


Software Development Cost/Quality Tradeoff
- COCOMO II calibration to 161 projects

[Figure: relative cost/source instruction (0.8–1.3) vs. RELY rating (Very Low to Very High). Defect risk runs from slight inconvenience (rough MTBF 1 hour) through low, easily recoverable loss (1 day), moderate recoverable loss (1 month), and high financial loss (2 years) to loss of human life (100 years). One point is shown: in-house support software at 1.0 (Nominal).]

Page 12: The Economics of Software Reliability


Software Development Cost/Quality Tradeoff
- COCOMO II calibration to 161 projects

[Figure: same axes, adding two points: commercial cost leader at 0.92 (Low) and commercial quality leader at 1.10 (High), alongside in-house support software at 1.0 (Nominal).]


Software Development Cost/Quality Tradeoff
- COCOMO II calibration to 161 projects

[Figure: the full calibration curve:]

RELY Rating   Defect Risk                     Rough MTBF   Relative Cost/Source Instruction
Very Low      Slight inconvenience            1 hour       0.82 – startup demo
Low           Low, easily recoverable loss    1 day        0.92 – commercial cost leader
Nominal       Moderate recoverable loss       1 month      1.00 – in-house support software
High          High financial loss             2 years      1.10 – commercial quality leader
Very High     Loss of human life              100 years    1.26 – safety-critical

Page 13: The Economics of Software Reliability


“Quality is Free”

• Did Philip Crosby’s book get it all wrong?
• Investments in dependable systems
  – Cost extra for simple, short-life systems
  – Pay off for high-value, long-life systems


Software Life-Cycle Cost vs. Dependability

[Figure: relative cost to develop (0.8–1.4) vs. COCOMO II RELY rating, plotting the development-cost calibration points: 0.82 (Very Low), 0.92 (Low), 1.0 (Nominal), 1.10 (High), 1.26 (Very High).]

Page 14: The Economics of Software Reliability


Software Life-Cycle Cost vs. Dependability

[Figure: adds a relative cost-to-develop-and-maintain curve (values shown: 1.23, 1.10, 0.99, 1.07) above the development-only points (0.82–1.26); including maintenance flattens the cost penalty for higher RELY ratings.]


Software Life-Cycle Cost vs. Dependability

[Figure: adds a second develop-and-maintain curve for a 70% maintenance fraction (values shown: 1.05, 1.07, 1.11, 1.20); the larger the maintenance share, the more the life-cycle cost curve favors higher RELY ratings.]
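One way to read the 70%-maintenance curve is as a weighted blend of development and maintenance cost multipliers. The sketch below uses the slides’ development multipliers but invented maintenance multipliers, so it shows the mechanism rather than the calibrated values:

```python
# Illustrative sketch: life-cycle cost as a weighted blend of development
# and maintenance multipliers. DEV holds the slide's COCOMO II RELY
# calibration; MAINT holds invented stand-in values.

DEV = {"VL": 0.82, "L": 0.92, "N": 1.00, "H": 1.10, "VH": 1.26}
MAINT = {"VL": 1.40, "L": 1.20, "N": 1.00, "H": 0.95, "VH": 1.05}  # hypothetical

def life_cycle_multiplier(rely, maint_fraction=0.70):
    """Blend development and maintenance cost by maintenance fraction."""
    return (1 - maint_fraction) * DEV[rely] + maint_fraction * MAINT[rely]

for rely in ("VL", "L", "N", "H", "VH"):
    print(rely, round(life_cycle_multiplier(rely), 2))
# Low-RELY software gets punished by maintenance costs; the curve's
# minimum moves toward higher RELY ratings as maint_fraction grows.
```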

Page 15: The Economics of Software Reliability


Software Ownership Cost vs. Dependability

[Figure: relative cost to develop, maintain, own, and operate vs. COCOMO II RELY rating. With operational-defect cost = 0, the curve matches the life-cycle cost curves above. When operational-defect cost at Nominal dependability equals the software life-cycle cost, low ratings become far costlier (VL = 2.55, L = 1.52) while values of 0.76 and 0.69 appear at the high-rating end, shifting the ownership-cost optimum toward higher dependability.]


Outline

• The business case for software reliability
• What are stakeholders really relying on?
• Software dependability in a competitive world
  – Is market success a monotone function of dependability?
  – Is quality really free?
  – Is “faster, better, cheaper” really achievable?
  – Value-based vs. value-neutral methods
• Conclusions

Page 16: The Economics of Software Reliability


Cost, Schedule, Quality: Pick any Two?

[Diagram: a triangle with vertices C, S, and Q – pick any two.]


Cost, Schedule, Quality: Pick any Two?

[Diagram: two C-S-Q triangles side by side.]

• Consider C, S, Q as the Independent Variable
  – Feature Set as the Dependent Variable

Page 17: The Economics of Software Reliability


C, S, Q as Independent Variable

• Determine Desired Delivered Defect Density (D4)
  – Or a value-based equivalent
• Prioritize desired features
  – Via QFD, IPT, stakeholder win-win
• Determine Core Capability (see the sketch below)
  – 90% confidence of D4 within cost and schedule
  – Balance parametric models and expert judgment
• Architect for ease of adding next-priority features
  – Hide sources of change within modules (Parnas)
• Develop core capability to D4 quality level
  – Usually in less than the available cost and schedule
• Add next-priority features as resources permit
• Versions used successfully on 32 of 34 USC digital library projects
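A minimal sketch of the core-capability step, with hypothetical features, values, and effort estimates:

```python
# Illustrative sketch (names and numbers hypothetical): choose a core
# capability by adding features in priority order until the
# 90%-confidence effort estimate no longer fits the available budget.

# (feature, business value, 90%-confidence effort estimate in person-weeks)
features = [
    ("order entry",     100, 30),
    ("billing",          80, 25),
    ("status reports",   40, 20),
    ("trend analytics",  25, 30),
]

def core_capability(features, budget_pw):
    """Greedily select highest-value features that fit the budget."""
    core, used = [], 0
    for name, value, effort in sorted(features, key=lambda f: -f[1]):
        if used + effort <= budget_pw:
            core.append(name)
            used += effort
    return core, used

print(core_capability(features, budget_pw=80))
# (['order entry', 'billing', 'status reports'], 75)
```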


Value-Based vs. Value-Neutral Methods

• Value-based defect reduction
• Constructive Quality Model (COQUALMO)
• Information Dependability Attribute Value Estimation (iDAVE) model

Page 18: The Economics of Software Reliability


Value-Based Defect Reduction Example: Goal-Question-Metric (GQM) Approach

Goal: Our supply chain software packages have too many defects. We need to get their defect rates down.

Question: ?


Value-Based GQM Approach – I

Q: How do software defects affect system value goals? Ask why the initiative is needed.
  – Order processing
  – Too much downtime on the operations critical path
  – Too many defects in operational plans
  – Too many new-release operational problems

G: New system-level goal: decrease software-defect-related losses in operational effectiveness
  – With the high-leverage problem areas above as specific subgoals

New Q: ?

Page 19: The Economics of Software Reliability


Value-Based GQM Approach – II

New Q: Perform system problem-area root cause analysis: ask why problems are happening, via models.
Example: Downtime on critical path

[Diagram: order-processing workflow – Order → Validate order → Validate items in stock → Schedule packaging, delivery → Prepare delivery packages → Deliver order, with Produce status reports as a side activity.]

• Where are the primary software-defect-related delays?
• Where are the biggest improvement-leverage areas?
  – Reducing software defects in the Scheduling module
  – Reducing non-software order-validation delays
  – Taking Status Reporting off the critical path
  – Downstream, getting a new Web-based order entry system
• Ask “why not?” as well as “why?”


Value-Based GQM Results

• Defect tracking weighted by system-value priority
  – Focuses defect removal on the highest-value effort
• Significantly higher effect on bottom-line business value
  – And on customer satisfaction levels
• Engages software engineers in system issues
  – Fits the increasing system-criticality of software
• Strategies often helped by quantitative models
  – COQUALMO, iDAVE

Page 20: The Economics of Software Reliability


Current COQUALMO System

[Diagram: COCOMO II feeding COQUALMO, which comprises a Defect Introduction Model and a Defect Removal Model. Inputs: software size estimate; software platform, project, product, and personnel attributes; defect removal profile levels (automation, reviews, testing). Outputs: software development effort, cost, and schedule estimate; number of residual defects and defect density per unit of size.]
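Structurally, COQUALMO estimates residual defects as introduced defects reduced by each removal profile in turn. The introduction rate and removal fractions below are placeholders, not the calibrated model:

```python
# Illustrative sketch of COQUALMO's structure (constants hypothetical):
# introduced defects scale with size; each defect-removal profile
# (automated analysis, peer reviews, execution testing and tools)
# independently removes a fraction of them.

INTRO_RATE = 30.0  # hypothetical defects introduced per KSLOC

# Hypothetical removal fractions by profile rating
REMOVAL = {
    "automated_analysis": {"Low": 0.10, "Nominal": 0.25, "High": 0.40},
    "peer_reviews":       {"Low": 0.25, "Nominal": 0.40, "High": 0.55},
    "execution_testing":  {"Low": 0.35, "Nominal": 0.50, "High": 0.65},
}

def residual_defect_density(ratings):
    """Residual density = introduction rate x product of (1 - removal fraction)."""
    density = INTRO_RATE
    for profile, rating in ratings.items():
        density *= 1 - REMOVAL[profile][rating]
    return density

ratings = {"automated_analysis": "Nominal", "peer_reviews": "High",
           "execution_testing": "High"}
print(round(residual_defect_density(ratings), 1), "defects/KSLOC")
```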


Defect Removal Rating Scales

Rating levels for the three defect-removal profiles (COCOMO II, p. 263):

• Very Low – Automated Analysis: simple compiler syntax checking. Peer Reviews: no peer review. Execution Testing and Tools: no testing.
• Low – Automated Analysis: basic compiler capabilities. Peer Reviews: ad-hoc informal walkthrough. Execution Testing and Tools: ad-hoc test and debug.
• Nominal – Automated Analysis: compiler extension; basic requirements and design consistency. Peer Reviews: well-defined preparation and review, minimal follow-up. Execution Testing and Tools: basic test; test criteria based on checklist.
• High – Automated Analysis: intermediate-level module; simple requirements/design. Peer Reviews: formal review roles, well-trained people, and basic checklist. Execution Testing and Tools: well-defined test sequence and basic test coverage tool system.
• Very High – Automated Analysis: more elaborate requirements/design; basic distributed processing. Peer Reviews: root cause analysis, formal follow-up, using historical data. Execution Testing and Tools: more advanced test tools and preparation; distributed monitoring.
• Extra High – Automated Analysis: formalized specification and verification; advanced distributed processing. Peer Reviews: extensive review checklist; statistical control. Execution Testing and Tools: highly advanced tools; model-based test.

Page 21: The Economics of Software Reliability


Defect Removal Estimates
- Nominal Defect Introduction Rates

Delivered defects/KSLOC by composite defect removal rating:

Very Low: 60    Low: 28.5    Nominal: 14.3    High: 7.5    Very High: 3.5    Extra High: 1.6
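A minimal lookup using the slide’s densities (the product size and rating below are hypothetical):

```python
# Minimal sketch: estimate delivered defects for a product of a given
# size, using the slide's densities per composite defect removal rating.

DELIVERED_PER_KSLOC = {
    "VL": 60.0, "Low": 28.5, "Nom": 14.3,
    "High": 7.5, "VH": 3.5, "XH": 1.6,
}

def delivered_defects(ksloc, rating):
    return ksloc * DELIVERED_PER_KSLOC[rating]

# A hypothetical 50-KSLOC product at a High removal rating
print(delivered_defects(50, "High"))  # 375.0
```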


iDAVE Model

Page 22: The Economics of Software Reliability


ROI Analysis Results Comparison
- Business Application vs. Space Application

[Figure: bar chart of ROI (−2 to 16) for dependability investment levels N→H, H→VH, and VH→XH, comparing Sierra Order Processing (business application) with Planetary Rover (space application). Visible values include 0.32 and −0.98.]


Conclusions: Software Reliability (SWR)

• There is no universal SWR metric to optimize
  – Need to balance stakeholder SWR value propositions
• Increasing need for value-based approaches to SWR
  – Methods and models are emerging to address the needs
• “Faster, better, cheaper” is feasible
  – If feature content can be a dependent variable
• “Quality is free” for stable, high-value, long-life systems
  – But not for dynamic, lower-value, short-life systems
• Future trends intensify SWR needs and challenges
  – Criticality, complexity, decreased control, faster change