Page 1

University of Southern California Center for Software Engineering / CeBASE

High Dependability Computing in a Competitive World

Barry Boehm, USC

NASA/CMU HDC Workshop

January 10, 2001

boehm@sunset.usc.edu; http://sunset.usc.edu

Page 2

HDC in a Competitive World

• The economics of IT competition and dependability

• Software Dependability Opportunity Tree
  – Decreasing defects
  – Decreasing defect impact
  – Continuous improvement
  – Attacking the future

• Conclusions and References

Page 3

Competing on Cost and Quality - adapted from Michael Porter, Harvard

[Chart: ROI vs. price or cost point, from under-priced to over-priced. Labeled regions: cost leader (acceptable Q), stuck in the middle, quality leader (acceptable C).]

Page 4

Competing on Schedule and Quality - a risk analysis approach

• Risk Exposure: RE = Prob(Loss) * Size(Loss)
  – “Loss”: financial; reputation; future prospects, …

• For multiple sources of loss:

  RE = Σ over sources of [Prob(Loss) * Size(Loss)]_source
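For illustration only (made-up numbers): a 20% chance of losing a $2M contract plus a 5% chance of a $10M recall gives RE = 0.20 * $2M + 0.05 * $10M = $0.4M + $0.5M = $0.9M.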

Page 5

Example RE Profile: Time to Ship - loss due to unacceptable dependability

[Chart: RE = P(L) * S(L) vs. time to ship (amount of testing). With little testing: many defects (high P(L)), critical defects (high S(L)). With more testing: few defects (low P(L)), minor defects (low S(L)).]

Page 6

Example RE Profile: Time to Ship
- Loss due to unacceptable dependability
- Loss due to market share erosion

[Chart: RE = P(L) * S(L) vs. time to ship (amount of testing). Dependability curve: many defects (high P(L)) and critical defects (high S(L)) with little testing; few defects (low P(L)) and minor defects (low S(L)) with more testing. Market-share curve: few rivals (low P(L)) and weak rivals (low S(L)) when shipping early; many rivals (high P(L)) and strong rivals (high S(L)) when shipping late.]

Page 7

Example RE Profile: Time to Ship - sum of risk exposures

[Chart: total RE = P(L) * S(L) vs. time to ship (amount of testing). Adding the dependability-loss curve (high early, low late) and the market-share-loss curve (low early, high late) yields a "Sweet Spot": the amount of testing that minimizes total risk exposure.]
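As a rough illustration of how the sweet spot falls out of summing the risk-exposure sources, here is a minimal Python sketch. The loss sizes, loss probabilities, and the exponential shapes of the two curves are illustrative assumptions, not data from the talk.

```python
import numpy as np

# Illustrative assumption: the probability of a dependability loss decays with
# testing time, while the probability of a market-share loss grows with delay.
test_months = np.linspace(0, 12, 121)        # time to ship (amount of testing)

size_defect_loss = 5.0   # assumed size of loss from shipping critical defects ($M)
size_market_loss = 8.0   # assumed size of loss from rivals capturing the market ($M)

p_defect_loss = np.exp(-0.6 * test_months)          # many defects early, few later
p_market_loss = 1.0 - np.exp(-0.1 * test_months)    # few rivals early, many later

re_defects = p_defect_loss * size_defect_loss       # RE = P(L) * S(L), per source
re_market = p_market_loss * size_market_loss
re_total = re_defects + re_market                   # sum over the sources of loss

sweet_spot = test_months[np.argmin(re_total)]       # ship time minimizing total RE
print(f"Sweet spot: ship after ~{sweet_spot:.1f} months of testing "
      f"(total RE = {re_total.min():.2f} $M)")
```

Raising size_defect_loss (the safety-critical case) pushes the sweet spot toward more testing; raising size_market_loss (the internet-startup case) pushes it toward earlier shipment, which is what the next two charts show.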

Page 8

Comparative RE Profile: Safety-Critical System

[Chart: RE vs. time to ship (amount of testing). A higher S(L) for defects shifts the total-RE curve so the high-Q sweet spot lies to the right of the mainstream sweet spot (more testing before shipping).]

Page 9

Comparative RE Profile: Internet Startup

[Chart: RE vs. time to ship (amount of testing). A higher S(L) for delays (TTM: time to market) shifts the total-RE curve so the low-TTM sweet spot lies to the left of the mainstream sweet spot (less testing before shipping).]

Page 10

Conclusions So Far

• Unwise to try to compete on both cost/schedule and quality
  – Some exceptions: major technology or marketplace edge

• There are no one-size-fits-all cost/schedule/quality strategies

• Risk analysis helps determine how much testing (prototyping, formal verification, etc.) is enough
  – Buying information to reduce risk

• Often difficult to determine parameter values
  – Some COCOMO II values discussed next

Page 11

Software Development Cost/Quality Tradeoff - COCOMO II calibration to 161 projects

[Chart: relative cost/source instruction (0.8 to 1.3) vs. COCOMO II RELY rating.
RELY ratings, defect risk, and rough MTBF (mean time between failures):
  Very Low - slight inconvenience (1 hour)
  Low - low, easily recoverable loss (1 day)
  Nominal - moderate recoverable loss (1 month)
  High - high financial loss (2 years)
  Very High - loss of human life (100 years)
Plotted point: in-house support software = 1.0 (Nominal).]

Page 12

Software Development Cost/Quality Tradeoff - COCOMO II calibration to 161 projects

[Same chart as the previous page, with additional plotted points: commercial cost leader = 0.92 (Low) and commercial quality leader = 1.10 (High), alongside in-house support software = 1.0 (Nominal).]

Page 13

Software Development Cost/Quality Tradeoff - COCOMO II calibration to 161 projects

[Same chart, now with all plotted points: startup demo = 0.82 (Very Low), commercial cost leader = 0.92 (Low), in-house support software = 1.0 (Nominal), commercial quality leader = 1.10 (High), safety-critical = 1.26 (Very High).]
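The plotted multipliers are the COCOMO II RELY effort multipliers, so they can be dropped straight into a back-of-the-envelope cost comparison. A minimal sketch, assuming a simple linear effort model with all other cost drivers held at nominal; the 3 person-months per KSLOC figure is an illustrative assumption, not a calibrated value:

```python
# COCOMO II RELY effort multipliers from the chart above (relative cost per
# source instruction); all other cost drivers are assumed nominal (1.0).
RELY = {
    "Very Low": 0.82,    # startup demo
    "Low": 0.92,         # commercial cost leader
    "Nominal": 1.00,     # in-house support software
    "High": 1.10,        # commercial quality leader
    "Very High": 1.26,   # safety-critical
}

def relative_dev_effort(ksloc: float, rely: str, nominal_pm_per_ksloc: float = 3.0) -> float:
    """Rough development effort in person-months (illustrative nominal productivity)."""
    return ksloc * nominal_pm_per_ksloc * RELY[rely]

for rating in RELY:
    print(f"{rating:>9}: {relative_dev_effort(100, rating):5.0f} PM for a 100-KSLOC system")
```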

Page 14

COCOMO II RELY Factor Dispersion

[Chart: dispersion of the COCOMO II calibration data across RELY ratings, Very Low through Very High (scale 0 to 100). The RELY effect is statistically significant: t = 2.6, where t > 1.9 is significant.]

Page 15

COCOMO II RELY Factor Phenomenology

RELY = Very Low vs. RELY = Very High, by phase:

• Rqts. and Product Design
  – Very Low: Little detail; many TBDs; little verification; minimal QA, CM, standards, draft user manual, test plans; minimal PDR
  – Very High: Detailed verification, QA, CM, standards, PDR, documentation; IV&V interface; very detailed test plans, procedures

• Detailed Design
  – Very Low: Basic design information; minimal QA, CM, standards, draft user manual, test plans; informal design inspections
  – Very High: Detailed verification, QA, CM, standards, CDR, documentation; very thorough design inspections; very detailed test plans, procedures; IV&V interface; less rqts. rework

• Code and Unit Test
  – Very Low: No test procedures; minimal path test, standards check; minimal QA, CM; minimal I/O and off-nominal tests; minimal user manual
  – Very High: Detailed test procedures, QA, CM, documentation; very thorough code inspections; very extensive off-nominal tests; IV&V interface; less rqts., design rework

• Integration and Test
  – Very Low: No test procedures; many requirements untested; minimal QA, CM; minimal stress and off-nominal tests; minimal as-built documentation
  – Very High: Very detailed test procedures, QA, CM, documentation; very extensive stress and off-nominal tests; IV&V interface; less rqts., design, code rework

Page 16

“Quality is Free”

• Did Philip Crosby’s book get it all wrong?

• Investments in dependable systems

– Cost extra for simple, short-life systems

– Pay off for high-value, long-life systems

Page 17

Software Life-Cycle Cost vs. Dependability

[Chart: relative cost to develop (0.8 to 1.4) vs. COCOMO II RELY rating. Development cost multipliers: Very Low 0.82, Low 0.92, Nominal 1.0, High 1.10, Very High 1.26.]

Page 18

Software Life-Cycle Cost vs. Dependability

[Chart: relative cost to develop and maintain (0.8 to 1.4) vs. COCOMO II RELY rating. In addition to the development multipliers (0.82, 0.92, 1.10, 1.26), a develop-and-maintain curve is plotted with values 1.23, 1.10, 0.99, and 1.07.]

Page 19

Software Life-Cycle Cost vs. Dependability

[Chart: relative cost to develop and maintain vs. COCOMO II RELY rating, as on the previous page, with a further curve assuming 70% of life-cycle cost is maintenance (plotted values 1.11, 1.05, 1.07, 1.20).]

• Low dependability is inadvisable for evolving systems

Page 20

Software Ownership Cost vs. Dependability

[Chart: relative cost to develop, maintain, own, and operate vs. COCOMO II RELY rating. Two cases are shown: operational-defect cost = 0 (the develop-and-maintain curve above), and operational-defect cost at Nominal dependability equal to the software life-cycle cost, for which ownership cost climbs steeply at low dependability (VL = 2.55, L = 1.52); values of 0.76 and 0.69 appear at the high-dependability end.]

Page 21

Conclusions So Far - 2

• Quality is better than free for high-value, long-life systems

• There is no universal dependability sweet spot
  – Yours will be determined by your value model
  – And the relative contributions of dependability techniques
  – Let’s look at these next

Page 22

HDC in a Competitive World

• The economics of IT competition and dependability

• Software Dependability Opportunity Tree
  – Decreasing defects
  – Decreasing defect impact
  – Continuous improvement
  – Attacking the future

• Conclusions

Page 23

Software Dependability Opportunity Tree

[Tree diagram:]
Decrease Defect Risk Exposure
  • Decrease Defect Prob(Loss)
    – Defect prevention
    – Defect detection and removal
  • Decrease Defect Impact, Size(Loss)
    – Value/risk-based defect reduction
    – Graceful degradation
  • Continuous improvement
    – CI methods and metrics
    – Process, product, people
    – Technology

Page 24

Software Defect Prevention Opportunity Tree

[Tree diagram:]
Defect Prevention
  • People practices: IPT, JAD, WinWin, …; PSP, Cleanroom, dual development, …; manual execution, scenarios, …; staffing for dependability
  • Standards: rqts., design, code, …; interfaces, traceability, …; checklists
  • Languages
  • Prototyping
  • Modeling & simulation
  • Reuse
  • Root cause analysis

Page 25

People Practices: Some Empirical Data

• Cleanroom: Software Engineering Lab
  – 25-75% reduction in failure rates
  – 5% vs. 60% of fix efforts over 1 hour

• Personal Software Process/Team Software Process
  – 50-75% defect reduction in a CMM Level 5 organization
  – Even higher reductions for less mature organizations

• Staffing
  – Many experiments find factor-of-10 differences in people’s defect rates

Page 26

Root Cause Analysis

• Each defect found can trigger five analyses:
  – Debugging: eliminating the defect
  – Regression: ensuring that the fix doesn’t create new defects
  – Similarity: looking for similar defects elsewhere
  – Insertion: catching future similar defects earlier
  – Prevention: finding ways to avoid such defects

• How many does your organization do?

Page 27

Pareto 80-20 Phenomena

• 80% of the rework comes from 20% of the defects

• 80% of the defects come from 20% of the modules
  – About half the modules are defect-free

• 90% of the downtime comes from < 10% of the defects

Page 28

Pareto Analysis of Rework Costs

[Chart: cumulative % of cost to fix SPRs vs. % of software problem reports (SPRs), for TRW Project A (373 SPRs) and TRW Project B (1005 SPRs); in both, a small fraction of the SPRs accounts for most of the fix cost. Major rework sources were off-nominal architecture-breakers: A - network failover; B - extra-long messages.]
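The same Pareto analysis is straightforward to run on your own problem-report data. A minimal sketch, assuming each software problem report carries a recorded fix cost (the sample numbers are made up):

```python
def pareto_rework(fix_costs):
    """Return (fraction of SPRs, cumulative fraction of fix cost) pairs,
    with reports sorted from most to least expensive to fix."""
    costs = sorted(fix_costs, reverse=True)
    total = sum(costs)
    points, running = [], 0.0
    for i, cost in enumerate(costs, start=1):
        running += cost
        points.append((i / len(costs), running / total))
    return points

# Illustrative data: a few expensive architecture-breakers plus many cheap fixes.
sample_costs = [200, 150, 120] + [5] * 40
for frac_sprs, frac_cost in pareto_rework(sample_costs):
    if frac_sprs >= 0.2:   # report the "top ~20% of SPRs" point
        print(f"Top {frac_sprs:.0%} of SPRs account for {frac_cost:.0%} of rework cost")
        break
```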

Page 29

Software Defect Detection Opportunity Tree

[Tree diagram:]
Defect Detection and Removal - rqts., design, code
  • Automated analysis
    – Completeness checking
    – Consistency checking (views, interfaces, behavior, pre/post conditions)
    – Traceability checking
    – Compliance checking (models, assertions, standards)
  • Reviewing
    – Peer reviews, inspections
    – Architecture Review Boards
    – Pair programming
  • Testing
    – Requirements & design
    – Structural
    – Operational profile
    – Usage (alpha, beta)
    – Regression
    – Value/risk-based
    – Test automation

Page 30

Orthogonal Defect Classification - Chillarege, 1996

[Bar chart: percent of defects within each activity (design, code review, function test, system test), broken down by ODC defect type (function, assignment, interface, timing); values range from about 10% to 40%, with different defect types dominating different activities.]

Page 31

UMD-USC CeBASE Experience Comparisons - http://www.cebase.org

“Under specified conditions, …”

Technique Selection Guidance (UMD, USC)
• Peer reviews are more effective than functional testing for faults of omission and incorrect specification
  – Peer reviews catch 60% of the defects
• Functional testing is more effective than reviews for faults concerning numerical approximations and control flow

Technique Definition Guidance (UMD)
• For a reviewer with an average experience level, a procedural approach to defect detection is more effective than a less procedural one
• Readers of a software artifact are more effective in uncovering defects when each uses a different and specific focus
  – Perspective-based reviews catch 35% more defects

Page 32

Factor-of-100 Growth in Software Cost-to-Fix

[Chart: relative cost to fix a defect (log scale, 1 to 1000) vs. phase in which the defect was fixed (requirements, design, code, development test, acceptance test, operation). Median values from a TRW survey grow from roughly 1-2 in requirements and design to roughly 100 in operation for larger software projects (SAFEGUARD, GTE, IBM-SSD data points); smaller software projects show much flatter growth. 80% and 20% markers annotate the spread of the data.]

Page 33

Reducing Software Cost-to-Fix: CCPDS-R - Royce, 1998

• Architecture first
• Integration during the design phase
• Demonstration-based evaluation
• Risk management

[Chart: configuration baseline change metrics - average hours per change (10 to 40) vs. project development schedule (months 15 to 40), broken out into design changes, implementation changes, and maintenance changes and ECPs.]

Page 34

COQUALMO: Constructive Quality Model

[Diagram: COQUALMO extends COCOMO II with a Defect Introduction Model and a Defect Removal Model.
Inputs: software size estimate; software product, process, computer, and personnel attributes; defect removal capability levels.
Outputs: COCOMO II produces the software development effort, cost, and schedule estimate; the Defect Introduction Model produces defect density per unit of size; the Defect Removal Model produces the number of residual defects.]
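The structure of the model is easy to sketch: defects are introduced in proportion to size, then each defect-removal profile removes a fraction of what remains. The introduction rates and removal fractions below are illustrative placeholders, not calibrated COQUALMO values:

```python
# Minimal COQUALMO-shaped sketch: defects introduced in proportion to size, then
# attenuated by each defect-removal profile. All numbers are illustrative
# placeholders, not calibrated COQUALMO values.
INTRODUCED_PER_KSLOC = {"requirements": 10, "design": 20, "code": 30}

# Assumed fraction of remaining defects removed by each profile at its capability level.
REMOVAL_FRACTIONS = {
    "automated_analysis": 0.10,
    "peer_reviews": 0.50,
    "execution_testing_and_tools": 0.60,
}

def residual_defects(ksloc: float) -> float:
    introduced = sum(rate * ksloc for rate in INTRODUCED_PER_KSLOC.values())
    remaining = introduced
    for fraction in REMOVAL_FRACTIONS.values():
        remaining *= (1.0 - fraction)   # each profile removes its share of what is left
    return remaining

print(f"Residual defects for a 100-KSLOC system: {residual_defects(100):.0f}")
```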

Page 35

Defect Impact Reduction Opportunity Tree

[Tree diagram:]
Decrease Defect Impact, Size(Loss)
  • Value/risk-based defect reduction
    – Business case analysis
    – Pareto (80-20) analysis
    – V/R-based reviews
    – V/R-based testing
    – Cost/schedule/quality as independent variable
  • Graceful degradation
    – Fault tolerance
    – Self-stabilizing SW
    – Reduced-capability modes
    – Manual backup
    – Rapid recovery

Page 36

Software Dependability Opportunity Tree

[Tree diagram, repeated from Page 23:]
Decrease Defect Risk Exposure
  • Decrease Defect Prob(Loss)
    – Defect prevention
    – Defect detection and removal
  • Decrease Defect Impact, Size(Loss)
    – Value/risk-based defect reduction
    – Graceful degradation
  • Continuous improvement
    – CI methods and metrics
    – Process, product, people
    – Technology

Page 37

The Experience Factory Organization - Exemplar: NASA Software Engineering Lab

[Diagram: Project Organization and Experience Factory interact.
Project Organization: 1. Characterize, 2. Set Goals, 3. Choose Process (producing execution plans), 4. Execute Process, with project support.
Experience Factory: 5. Analyze (products, lessons learned, models), 6. Package (generalize, tailor, formalize, disseminate) into an Experience Base.
Flows between them: environment characteristics; tailorable knowledge, consulting; project analysis, process modification; data, lessons learned.]

Page 38

Goal-Model-Question-Metric Approach to Testing

• Goal: Accept product
  – Models: Product requirements
  – Questions: What inputs, outputs verify requirements compliance?
  – Metrics: Percent of requirements tested

• Goal: Cost-effective defect detection
  – Models: Defect source models - Orthogonal Defect Classification (ODC), defect-prone modules
  – Questions: How best to mix and sequence automated analysis, reviewing, and testing?
  – Metrics: Defect detection yield by defect class; defect detection costs, ROI; expected vs. actual defect detection by class

• Goal: No known delivered defects
  – Models: Defect closure rate models
  – Questions: How many outstanding defects? What is their closure status?
  – Metrics: Percent of defect reports closed (expected; actual)

• Goal: Meeting target reliability, delivered defect density (DDD)
  – Models: Operational profiles; reliability estimation models; DDD estimation models
  – Questions: How well are we progressing toward targets? How long will it take to reach targets?
  – Metrics: Estimated vs. target reliability, DDD; estimated time to achieve targets

• Goal: Minimize adverse effects of defects
  – Models: Business-case or mission models of value by feature, quality attribute, delivery time; tradeoff relationships
  – Questions: What HDC techniques best address high-value elements? How well are project techniques minimizing adverse effects?
  – Metrics: Risk exposure reduction vs. time; business-case based "earned value"; value-weighted defect detection yields for each technique

Note: No one-size-fits-all solution

Page 39

GMQM Paradigm: Value/Risk-Driven Testing

• Goal: Minimize adverse effects of defects
• Models: Business-case or mission models of value
  – By feature, quality attribute, or delivery time
  – Attribute tradeoff relationships
• Questions: What HDC techniques best address high-value elements?
  – How well are project techniques minimizing adverse effects?
• Metrics: Risk exposure reduction vs. time
  – Business-case based “earned value”
  – Value-weighted defect detection yields by technique (sketched below)
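The last metric is easy to make concrete: a value-weighted defect-detection yield credits each defect in proportion to the business value it puts at risk, instead of counting all defects equally. A minimal sketch with made-up defect records:

```python
# Value-weighted defect-detection yield: each defect counts in proportion to the
# business value it puts at risk (records are made up, not data from the talk).
defects = [
    # (caught by the technique under evaluation?, value at risk)
    (True, 90), (True, 5), (False, 3), (True, 2), (False, 80),
]

def detection_yield(records, value_weighted=True):
    total = sum(value if value_weighted else 1 for _, value in records)
    caught = sum(value if value_weighted else 1 for hit, value in records if hit)
    return caught / total

print(f"Defect-count yield:   {detection_yield(defects, value_weighted=False):.0%}")  # 60%
print(f"Value-weighted yield: {detection_yield(defects, value_weighted=True):.0%}")   # 54%
```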

Page 40

“Dependability” Attributes and Tradeoffs

• Robustness: reliability, availability, survivability
• Protection: security, safety
• Quality of service: accuracy, fidelity, performance assurance
• Integrity: correctness, verifiability

• Attributes mostly compatible and synergetic
• Some conflicts and tradeoffs
  – Spreading information: survivability vs. security
  – Fail-safe: safety vs. quality of service
  – Graceful degradation: survivability vs. quality of service

Page 41

Resulting Reduction in Risk Exposure

[Chart: risk exposure from defects (RE = P(L) * S(L)) vs. time to ship (amount of testing). Value/risk-driven testing (80-20 value distribution) drives risk exposure down faster than acceptance or defect-density testing that treats all defects as equal.]

Page 42

20% of Features Provide 80% of Value: Focus Testing on These (Bullock, 2000)

[Chart: % of value for correct customer billing (0 to 100) vs. customer type (about 15 types); the curve rises steeply, with most of the value concentrated in the first few customer types.]

Page 43

Cost, Schedule, Quality: Pick any Two?

[Diagram: three overlapping circles labeled C (cost), S (schedule), and Q (quality).]

Page 44

Cost, Schedule, Quality: Pick any Two?

[Diagram: the C, S, Q circles, shown twice.]

• Consider C, S, Q as independent variable
  – Feature set as dependent variable

Page 45

C, S, Q as Independent Variable

• Determine Desired Delivered Defect Density (D4)
  – Or a value-based equivalent
• Prioritize desired features
  – Via QFD, IPT, stakeholder win-win
• Determine core capability (see the sketch after this list)
  – 90% confidence of D4 within cost and schedule
  – Balance parametric models and expert judgment
• Architect for ease of adding next-priority features
  – Hide sources of change within modules (Parnas)
• Develop core capability to D4 quality level
  – Usually in less than available cost and schedule
• Add next-priority features as resources permit
• Versions used successfully on 17 of 19 USC digital library projects
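As a rough illustration of the core-capability step, the sketch below takes features in priority order and keeps the largest prefix whose estimated cost fits the budget after holding back a reserve; the feature names, costs, and the 25% reserve are illustrative assumptions, and the 90% confidence on the slide would come from parametric models plus expert judgment rather than a fixed margin.

```python
# Sketch of the "determine core capability" step: take features in priority order
# and keep the largest prefix whose estimated cost fits within the budget after
# holding back a reserve for risk. All names and numbers are illustrative.
def core_capability(prioritized_features, budget, reserve_fraction=0.25):
    """prioritized_features: list of (name, estimated_cost), highest priority first."""
    usable = budget * (1.0 - reserve_fraction)
    core, spent = [], 0.0
    for name, cost in prioritized_features:
        if spent + cost > usable:
            break          # remaining features become next-priority additions
        core.append(name)
        spent += cost
    return core, spent

features = [("billing", 30), ("search", 25), ("reports", 20), ("theming", 15)]
core, spent = core_capability(features, budget=80)
print(core, spent)   # ['billing', 'search'] 55.0, within the 60-unit usable budget
```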

Page 46

Attacking the Future: Software Trends
“Skate to where the puck is going” - Wayne Gretzky

• Increased complexity
  – Everything connected
  – Opportunities for chaos (agents)
  – Systems of systems
• Decreased control of content
  – Infrastructure
  – COTS components
• Faster change
  – Time-to-market pressures
  – Marry in haste; no leisure to repent
  – Adapt or die (e-commerce)
• Fantastic opportunities
  – Personal, corporate, national, global

Page 47

Opportunities for Chaos: User Programming
- From NSF SE Workshop: http://sunset.usc.edu/Activities/aug24-25

• Mass of user-programmers a new challenge
  – 40-50% of user programs have nontrivial defects
  – By 2005, up to 55M “sorcerer’s apprentice” user-programmers loosing agents into cyberspace

• Need research on “seat belts, air bags, safe driving guidance” for user-programmers
  – Frameworks (generic, domain-specific)
  – Rules of composition, adaptation, extension
  – “Crash barriers”; outcome visualization

Page 48

Dependable COTS-Based Systems

• Establish dependability priorities

• Assess COTS dependability risks
  – Prototype, benchmark, testbed, operational profiles

• Wrap COTS components
  – Just use the parts you need
  – Interface standards and APIs help

• Prepare for inevitable COTS changes
  – Technology watch, testbeds, synchronized refresh

Page 49

HDC Technology Prospects

• High-dependability change
  – Architecture-based analysis and composition
  – Lightweight formal methods
• Model-based analysis and assertion checking
  – Domain models, stochastic models
• High-dependability test data generation
• Dependability attribute tradeoff analysis
  – Dependable systems from less-dependable components
  – Exploiting cheap, powerful hardware
• Self-stabilizing software
• Complementary empirical methods
  – Experiments, testbeds, models, experience factory
  – Accelerated technology transition, education, training

Page 50

Conclusions

• Future trends intensify competitive HDC challenges
  – Complexity, criticality, decreased control, faster change

• Organizations need tailored, mixed HDC strategies
  – No universal HDC sweet spot
  – Goal/value/risk analysis useful
  – Quantitative data and models becoming available

• HDC Opportunity Tree helps sort out mixed strategies

• Quality is better than free for high-value, long-life systems

• Attractive new HDC technology prospects emerging
  – Architecture- and model-based methods
  – Lightweight formal methods
  – Self-stabilizing software
  – Complementary theory and empirical methods

Page 51

References

V. Basili et al., “SEL’s Software Process Improvement Program,” IEEE Software, November 1995, pp. 83-87.

B. Boehm and V. Basili, “Software Defect Reduction Top 10 List,” IEEE Computer, January 2001.

B. Boehm et al., Software Cost Estimation with COCOMO II, Prentice Hall, 2000.

J. Bullock, “Calculating the Value of Testing,” Software Testing and Quality Engineering, May/June 2000, pp. 56-62.

CeBASE (Center for Empirically-Based Software Engineering), http://www.cebase.org

R. Chillarege, “Orthogonal Defect Classification,” in M. Lyu (ed.), Handbook of Software Reliability Engineering, IEEE-CS Press, 1996, pp. 359-400.

P. Crosby, Quality is Free, Mentor, 1980.

R. Grady, Practical Software Metrics, Prentice Hall, 1992.

N. Leveson, Safeware: System Safety and Computers, Addison-Wesley, 1995.

B. Littlewood et al., “Modeling the Effects of Combining Diverse Fault Detection Techniques,” IEEE Trans. Software Engineering, December 2000, pp. 1157-1167.

M. Lyu (ed.), Handbook of Software Reliability Engineering, IEEE-CS Press, 1996.

J. Musa, Software Reliability Engineered Testing, McGraw-Hill, 1998.

M. Porter, Competitive Strategy, Free Press, 1980.

P. Rook (ed.), Software Reliability Handbook, Elsevier Applied Science, 1990.

W. E. Royce, Software Project Management, Addison-Wesley, 1998.

Page 52

CeBASE Software Defect Reduction Top-10 List - http://www.cebase.org

1. Finding and fixing a software problem after delivery is often 100 times more expensive than finding and fixing it during the requirements and design phase.
2. About 40-50% of the effort on current software projects is spent on avoidable rework.
3. About 80% of the avoidable rework comes from 20% of the defects.
4. About 80% of the defects come from 20% of the modules, and about half the modules are defect-free.
5. About 90% of the downtime comes from at most 10% of the defects.
6. Peer reviews catch 60% of the defects.
7. Perspective-based reviews catch 35% more defects than non-directed reviews.
8. Disciplined personal practices can reduce defect introduction rates by up to 75%.
9. All other things being equal, it costs 50% more per source instruction to develop high-dependability software products than to develop low-dependability software products. However, the investment is more than worth it if significant operations and maintenance costs are involved.
10. About 40-50% of user programs have nontrivial defects.