Models for Competing on Schedule, Cost, and Quality


Page 1: Models for Competing on Schedule, Cost, and Quality

University of Southern California
Center for Software Engineering (CSE)

Competing on Schedule, Cost, and Quality: The Role of Software Models

Barry Boehm, USC
COCOMO/SCM Forum #16
October 24, 2001
([email protected])
(http://sunset.usc.edu)

Page 2: Models for Competing on Schedule, Cost, and Quality

Outline

• Traditional and e-Services Development
  - USC research model perspectives
• Software Schedule-Cost-Quality Tradeoffs
  - Risk exposure
  - Development and ownership costs
• The SAIV/CAIV/SCQAIV Process Models
• Conclusions and References

Page 3: Models for Competing on Schedule, Cost, and Quality

Traditional and e-Services Development

Traditional Development
• Standalone systems
• Stable requirements
• Rqts. determine capabilities
• Control over evolution
• Enough time to keep stable
• Repeatability-oriented process, maturity models

e-Services Development
• Everything connected--maybe
• Rapid requirements change
• COTS capabilities determine rqts.
• No control over COTS evolution
• Ever-decreasing cycle times
• Adaptive process models

Page 4: Models for Competing on Schedule, Cost, and Quality

USC: Model Integration Research

Diagram: MBASE and CeBASE integrate four classes of software models.
• Success models: Win-Win, business case analysis, results chains, risk, software warranties, correctness, RAD, Six Sigma, stories, award fees, agility, JAD, QFD, the Golden Rule
• Process models: Waterfall, Spiral, RUP, XP, SAIV/CAIV/SCQAIV, risk management, business process reengineering, CMMs, Peopleware, IPTs, agile development, groupware, Easy WinWin, Experience Factory, GQM
• Product models: UML, XML, CORBA, COM, architectures, product lines, OO analysis and design, requirements, operational concepts, domain ontologies, COTS, GOTS
• Property models: COCOMO II, COCOTS, CORADMO, system dynamics, metrics, -ilities, COQUALMO, simulation and modeling

Page 5: Models for Competing on Schedule, Cost, and Quality

Outline

• Traditional and e-Services Development
  - USC research model perspectives
• Software Schedule-Cost-Quality Tradeoffs
  - Risk exposure
  - Development and ownership costs
• The SAIV/CAIV/SCQAIV Process Models
• Conclusions and References

Page 6: Models for Competing on Schedule, Cost, and Quality

Competing on Schedule and Quality - A risk analysis approach

• Risk Exposure RE = Prob(Loss) * Size(Loss)
  – "Loss" – financial; reputation; future prospects, ...
• For multiple sources of loss:
  RE = Σ_sources [Prob(Loss) * Size(Loss)]_source
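
To make the summation concrete, here is a minimal sketch (in Python) of totaling risk exposure over several loss sources; the sources, probabilities, and loss sizes are illustrative values, not figures from the talk.

```python
# Risk exposure: RE = sum over sources of Prob(Loss) * Size(Loss)
# The sources and numbers below are illustrative only.

loss_sources = {
    # source: (probability of loss, size of loss in $K)
    "critical defect escapes to field": (0.10, 2000),
    "market-share erosion from late ship": (0.30, 500),
    "reputation damage": (0.05, 1000),
}

def risk_exposure(sources):
    """Total RE = sum of P(L) * S(L) over all loss sources."""
    return sum(p * s for p, s in sources.values())

print(f"Total risk exposure: ${risk_exposure(loss_sources):.0f}K")
```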

Page 7: Models for Competing on Schedule, Cost, and Quality

Example RE Profile: Time to Ship - Loss due to unacceptable dependability

Figure: RE = P(L) * S(L) plotted against time to ship (amount of testing). With little testing, many defects remain (high P(L)) and critical defects are likely (high S(L)); with extensive testing, few defects remain (low P(L)) and those remaining are minor (low S(L)), so the dependability risk-exposure curve falls as testing continues.

Page 8: Models for Competing on Schedule, Cost, and Quality

Example RE Profile: Time to Ship - Loss due to unacceptable dependability and to market share erosion

Figure: the same axes, with a second curve for market-share risk. Shipping early, when there are few and weak rivals, gives low P(L) and low S(L); shipping late, when many strong rivals have entered, gives high P(L) and high S(L). The market-erosion risk-exposure curve therefore rises with time to ship while the dependability curve falls.

Page 9: Models for Competing on Schedule, Cost, and Quality

Example RE Profile: Time to Ship - Sum of Risk Exposures

Figure: summing the falling dependability curve and the rising market-erosion curve yields a total risk-exposure curve with a "sweet spot": the amount of testing (time to ship) that minimizes total RE.
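
As a concrete illustration, the sketch below (Python) sums a falling defect-risk curve and a rising market-risk curve and locates the sweet spot numerically; the curve shapes and constants are assumptions for illustration, not calibrated values.

```python
import math

# Illustrative risk-exposure curves as functions of test time t (months).
# Real curves would come from project data or models such as COQUALMO.

def re_defects(t):
    """Dependability RE: falls as testing removes defects."""
    return 100 * math.exp(-0.5 * t)

def re_market(t):
    """Market-erosion RE: rises as rivals enter the market."""
    return 5 * t

def total_re(t):
    return re_defects(t) + re_market(t)

# Scan candidate ship times and pick the minimum total RE (the sweet spot).
candidates = [t / 10 for t in range(1, 121)]   # 0.1 .. 12 months
sweet_spot = min(candidates, key=total_re)
print(f"Sweet spot: ship after about {sweet_spot:.1f} months of testing "
      f"(total RE = {total_re(sweet_spot):.1f})")
```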

Page 10: Models for Competing on Schedule, Cost, and Quality

Comparative RE Profile: Safety-Critical System

Figure: for a safety-critical system, the size of loss from defects S(L) is much higher. This raises the dependability risk-exposure curve and moves the sweet spot to the right of the mainstream sweet spot: a high-Q (high-quality) sweet spot requiring considerably more testing before shipping.

Page 11: Models for Competing on Schedule, Cost, and Quality

Comparative RE Profile: Internet Startup

Figure: for an Internet startup, the size of loss from schedule delays is much higher. This raises the market-erosion curve and moves the sweet spot to the left of the mainstream sweet spot: a low-TTM (time to market) sweet spot favoring earlier shipment with less testing.

Page 12: Models for Competing on Schedule, Cost, and Quality

Conclusions So Far

• Unwise to try to compete on both cost/schedule and quality
  – Some exceptions: major technology or marketplace edge
• There are no one-size-fits-all cost/schedule/quality strategies
• Risk analysis helps determine how much testing (prototyping, formal verification, etc.) is enough
  – Buying information to reduce risk
• Often difficult to determine parameter values
  – Some COCOMO II values discussed next

Page 13: Models for Competing on Schedule, Cost, and Quality

Software Development Cost/Quality Tradeoff - COCOMO II calibration to 161 projects

Figure: relative cost per source instruction (0.8-1.3) plotted against the COCOMO II RELY (required software reliability) rating. Each rating corresponds to a defect risk and a rough MTBF (mean time between failures):

RELY rating   Defect risk                    Rough MTBF
Very Low      Slight inconvenience           1 hour
Low           Low, easily recoverable loss   1 day
Nominal       Moderate, recoverable loss     1 month
High          High financial loss            2 years
Very High     Loss of human life             100 years

The baseline point shown is in-house support software at Nominal RELY, relative cost 1.0.

Page 14: Models for Competing on Schedule, Cost, and Quality

Software Development Cost/Quality Tradeoff - COCOMO II calibration to 161 projects (continued)

Figure: the same chart, adding two market sectors: a commercial cost leader at Low RELY (relative cost 0.92) and a commercial quality leader at High RELY (relative cost 1.10), alongside in-house support software at Nominal (1.0).

Page 15: Models for Competing on Schedule, Cost, and Quality

Software Development Cost/Quality Tradeoff - COCOMO II calibration to 161 projects (complete)

Figure: the full chart, with all five market sectors placed on the RELY scale:

RELY rating   Defect risk                    Rough MTBF   Relative cost/instruction   Example sector
Very Low      Slight inconvenience           1 hour       0.82                        Startup demo
Low           Low, easily recoverable loss   1 day        0.92                        Commercial cost leader
Nominal       Moderate, recoverable loss     1 month      1.0                         In-house support software
High          High financial loss            2 years      1.10                        Commercial quality leader
Very High     Loss of human life             100 years    1.26                        Safety-critical
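
To see what those multipliers mean in absolute terms, the short sketch below scales a project cost by the chart's RELY cost-per-instruction factors; the baseline cost is an assumption for illustration only.

```python
# Relative cost/source instruction by RELY rating, from the chart above.
rely_cost_factor = {
    "Very Low (startup demo)":              0.82,
    "Low (commercial cost leader)":         0.92,
    "Nominal (in-house support software)":  1.00,
    "High (commercial quality leader)":     1.10,
    "Very High (safety-critical)":          1.26,
}

nominal_cost_k = 2000   # illustrative: $2.0M development cost at Nominal RELY

for rating, factor in rely_cost_factor.items():
    print(f"{rating:38s} ${nominal_cost_k * factor:7.0f}K")
```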

Page 16: Models for Competing on Schedule, Cost, and Quality

COCOMO II RELY Factor Dispersion

Figure: dispersion of the RELY multiplier data across the Very Low through Very High ratings (vertical scale 0-100). The RELY effect is statistically significant: t = 2.6, where t > 1.9 is significant.

Page 17: Models for Competing on Schedule, Cost, and Quality

COCOMO II RELY Factor Phenomenology

Rqts. and Product Design
• RELY = Very Low: Little detail; many TBDs; little verification; minimal QA, CM, standards, draft user manual, test plans; minimal PDR
• RELY = Very High: Detailed verification, QA, CM, standards, PDR, documentation; IV&V interface; very detailed test plans, procedures

Detailed Design
• RELY = Very Low: Basic design information; minimal QA, CM, standards, draft user manual, test plans; informal design inspections
• RELY = Very High: Detailed verification, QA, CM, standards, CDR, documentation; very thorough design inspections; very detailed test plans, procedures; IV&V interface; less rqts. rework

Code and Unit Test
• RELY = Very Low: No test procedures; minimal path test, standards check; minimal QA, CM; minimal I/O and off-nominal tests; minimal user manual
• RELY = Very High: Detailed test procedures, QA, CM, documentation; very thorough code inspections; very extensive off-nominal tests; IV&V interface; less rqts., design rework

Integration and Test
• RELY = Very Low: No test procedures; many requirements untested; minimal QA, CM; minimal stress and off-nominal tests; minimal as-built documentation
• RELY = Very High: Very detailed test procedures, QA, CM, documentation; very extensive stress and off-nominal tests; IV&V interface; less rqts., design, code rework

Page 18: Models for Competing on Schedule, Cost, and Quality


“Quality is Free”

• Did Philip Crosby’s book get it all wrong?

• Investments in dependable systems

– Cost extra for simple, short-life systems

– Pay off for high-value, long-life systems

Page 19: Models for Competing on Schedule, Cost, and Quality

Software Life-Cycle Cost vs. Dependability

Figure: relative cost to develop (scale 0.8-1.4) vs. COCOMO II RELY rating. The development-only curve runs from 0.82 at Very Low through 0.92 (Low), 1.0 (Nominal), and 1.10 (High) to 1.26 at Very High.

Page 20: Models for Competing on Schedule, Cost, and Quality

Software Life-Cycle Cost vs. Dependability (continued)

Figure: the same chart with the vertical axis relabeled "relative cost to develop and maintain." When maintenance is included, the curve flips: roughly 1.23 at Very Low, 1.10 at Low, 1.0 at Nominal, 0.99 at High, and 1.07 at Very High, against development-only costs of 0.82, 0.92, 1.0, 1.10, and 1.26.

Page 21: Models for Competing on Schedule, Cost, and Quality

Software Life-Cycle Cost vs. Dependability (continued)

Figure: a further curve for projects in which maintenance dominates (labeled "70% Maint."), with additional points at 1.05, 1.07, 1.11, and 1.20, steepening the penalty for low-RELY development.

• Low dependability is inadvisable for evolving systems
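
The curve flip can be reproduced with a simple weighting: total relative cost is the development multiplier weighted by the development fraction plus a maintenance multiplier weighted by the maintenance fraction. The sketch below uses the development multipliers from the chart; the maintenance multipliers are illustrative assumptions, not COCOMO II calibrated values.

```python
# Development RELY multipliers from the chart (COCOMO II.2000 calibration).
dev_mult = {"VL": 0.82, "L": 0.92, "N": 1.00, "H": 1.10, "VH": 1.26}

# Maintenance multipliers are ILLUSTRATIVE assumptions: low-RELY code is
# assumed to cost more per instruction to maintain, high-RELY code less.
maint_mult = {"VL": 1.40, "L": 1.20, "N": 1.00, "H": 0.95, "VH": 0.90}

def life_cycle_cost(rating, maint_fraction):
    """Relative life-cycle cost, given the fraction of effort in maintenance."""
    return ((1 - maint_fraction) * dev_mult[rating]
            + maint_fraction * maint_mult[rating])

for rating in ["VL", "L", "N", "H", "VH"]:
    print(rating,
          f"dev-only: {dev_mult[rating]:.2f}",
          f"70% maint.: {life_cycle_cost(rating, 0.70):.2f}")
```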

Page 22: Models for Competing on Schedule, Cost, and Quality

Software Ownership Cost vs. Dependability

Figure: relative cost to develop, maintain, own, and operate vs. COCOMO II RELY rating. With operational-defect cost set to zero, the curve is the develop-and-maintain curve above. If the operational-defect cost at Nominal dependability equals the software life-cycle cost, total ownership cost climbs to 2.55 at Very Low and 1.52 at Low, while dropping to about 0.76 at High and 0.69 at Very High.

Page 23: Models for Competing on Schedule, Cost, and Quality

Conclusions So Far - 2

• Quality is better than free for high-value, long-life systems
• There is no universal dependability sweet spot
  – Yours will be determined by your value model
  – And the relative contributions of dependability techniques
• Risk analysis helps answer "How much is enough?"
• COCOMO II provides a schedule-cost-quality tradeoff analysis framework

Page 24: Models for Competing on Schedule, Cost, and Quality

Outline

• Traditional and e-Services Development
  - USC research model perspectives
• Software Schedule-Cost-Quality Tradeoffs
  - Risk exposure
  - Development and ownership costs
• The COCOMO Suite of Tradeoff Models
  - COCOMO II, CORADMO, COQUALMO-ODC
• The SAIV/CAIV/SCQAIV Process Models
• Conclusions and References

Page 25: Models for Competing on Schedule, Cost, and Quality

COCOMO II Book Table of Contents
- Boehm, Abts, Brown, Chulani, Clark, Horowitz, Madachy, Reifer, Steece, Software Cost Estimation with COCOMO II, Prentice Hall, 2000

1. Introduction
2. Model Definition
3. Application Examples
4. Calibration
5. Emerging Extensions
6. Future Trends
Appendices – Assumptions, Data Forms, User's Manual, CD Content

CD: Video tutorials, USC COCOMO II.2000, commercial tool demos, manuals, data forms, web site links, Affiliate forms

Page 26: Models for Competing on Schedule, Cost, and Quality

Purpose of COCOMO II

To help people reason about the cost and schedule implications of their software decisions.

Page 27: Models for Competing on Schedule, Cost, and Quality

Major Decision Situations Helped by COCOMO II

• Software investment decisions
  – When to develop, reuse, or purchase
  – What legacy software to modify or phase out
• Setting project budgets and schedules
• Negotiating cost/schedule/performance tradeoffs
• Making software risk management decisions
• Making software improvement decisions
  – Reuse, tools, process maturity, outsourcing

Page 28: Models for Competing on Schedule, Cost, and Quality


Need to ReEngineer COCOMO 81

• New software processes

• New sizing phenomena

• New reuse phenomena

• Need to make decisions based on incomplete information

Page 29: Models for Competing on Schedule, Cost, and Quality

COCOMO II Model Stages

Figure: the "cone of uncertainty" in relative size range across phases (Feasibility, Plans and Rqts., Product Design, Detail Design, Devel. and Test) and their milestones (Concept of Operation, Rqts. Spec., Product Design Spec., Detail Design Spec., Accepted Software). The range narrows from 4x/0.25x at Feasibility to 2x/0.5x, 1.5x/0.67x, and 1.25x/0.8x as the product is defined. Three COCOMO II model stages match this progression: Applications Composition (3 parameters), Early Design (13 parameters), and Post-Architecture (23 parameters).

Page 30: Models for Competing on Schedule, Cost, and Quality

Early Design and Post-Architecture Model

Diagram: size, process scale factors, and environment effort multipliers determine effort; effort and the process scale factors then determine schedule.
• Environment: product, platform, people, and project factors (effort multipliers)
• Size: nonlinear reuse and volatility effects
• Process: constraint, risk/architecture, team, and maturity factors (scale factors)
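
As a concrete sketch of how these pieces combine, the Python below implements the published COCOMO II.2000 Post-Architecture effort and schedule equations (constants A = 2.94, B = 0.91, C = 3.67, D = 0.28, ignoring the SCED schedule-compression adjustment); the sample scale-factor sum and effort multipliers are illustrative values, not a calibrated project.

```python
# COCOMO II.2000 Post-Architecture equations (published calibration constants):
#   Effort PM = A * Size^E * product(effort multipliers),  E = B + 0.01 * sum(scale factors)
#   Schedule TDEV = C * PM^F,                              F = D + 0.2 * (E - B)
A, B, C, D = 2.94, 0.91, 3.67, 0.28

def cocomo2(ksloc, scale_factor_sum, effort_multipliers):
    """Return (person-months, calendar months) for a Post-Architecture estimate."""
    E = B + 0.01 * scale_factor_sum
    pm = A * ksloc ** E
    for em in effort_multipliers:
        pm *= em
    F = D + 0.2 * (E - B)
    tdev = C * pm ** F
    return pm, tdev

# Illustrative inputs: a 100-KSLOC project with all scale factors Nominal
# (sum about 18.97), RELY = Very High (1.26), other multipliers Nominal (1.0).
pm, tdev = cocomo2(100, scale_factor_sum=18.97, effort_multipliers=[1.26])
print(f"Effort: {pm:.0f} PM, Schedule: {tdev:.1f} months")
```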

Page 31: Models for Competing on Schedule, Cost, and Quality

Nonlinear Reuse Effects

Figure: relative cost vs. amount of software modified (both on a 0-1.0 scale). The usual linear assumption says that modifying 25% of a module costs 25% of new development. Data on 2,954 NASA modules [Selby, 1988] show a strongly nonlinear effect: even near-zero modification carries a fixed cost of roughly 0.05, and relative cost reaches about 0.55 at 25% modified and 0.70 at 50% modified before converging to 1.0 at full modification.
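
COCOMO II captures this nonlinearity in its reuse (equivalent-size) model; the sketch below follows the form published in the COCOMO II book, with sample parameter values that are illustrative only.

```python
def equivalent_ksloc(adapted_ksloc, dm, cm, im, aa=4, su=30, unfm=0.4):
    """COCOMO II reuse model: equivalent new size of adapted code.

    dm, cm, im: % design, code, and integration modified (0-100).
    aa:   assessment & assimilation increment (0-8).
    su:   software understanding increment (10-50).
    unfm: programmer unfamiliarity with the software (0-1).
    """
    aaf = 0.4 * dm + 0.3 * cm + 0.3 * im          # adaptation adjustment factor
    if aaf <= 50:
        aam = (aa + aaf * (1 + 0.02 * su * unfm)) / 100
    else:
        aam = (aa + aaf + su * unfm) / 100
    return adapted_ksloc * aam

# Illustrative: reusing a 20-KSLOC component with 10% design, 20% code, and
# 30% integration modification still costs a sizable fraction of new code.
print(f"Equivalent size: {equivalent_ksloc(20, dm=10, cm=20, im=30):.1f} KSLOC")
```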

Page 32: Models for Competing on Schedule, Cost, and Quality

COCOMO II.2000 Productivity Ranges

Chart: the productivity range (ratio of highest to lowest multiplier) for each COCOMO II.2000 cost driver and scale factor, on a scale from 1 to 2.4, ordered from largest to smallest:

Product Complexity (CPLX)

Analyst Capability (ACAP)

Programmer Capability (PCAP)

Time Constraint (TIME)

Personnel Continuity (PCON)

Required Software Reliability (RELY)

Documentation Match to Life Cycle Needs (DOCU)

Multi-Site Development (SITE)

Applications Experience (AEXP)

Platform Volatility (PVOL)

Use of Software Tools (TOOL)

Storage Constraint (STOR)

Process Maturity (PMAT)

Language and Tools Experience (LTEX)

Required Development Schedule (SCED)

Data Base Size (DATA)

Platform Experience (PEXP)

Architecture and Risk Resolution (RESL)

Precedentedness (PREC)

Develop for Reuse (RUSE)

Team Cohesion (TEAM)

Development Flexibility (FLEX)

Scale Factor Ranges: 10, 100, 1000 KSLOC

Page 33: Models for Competing on Schedule, Cost, and Quality

COCOMO II Estimation Accuracy
- Percentage of sample projects estimated within 30% of actuals, without and with calibration to the data source

              COCOMO 81   COCOMO II.1997   COCOMO II.2000
# Projects    63          83               161
Effort        81%         52% / 64%        75% / 80%
Schedule      65%         61% / 62%        72% / 81%

Page 34: Models for Competing on Schedule, Cost, and Quality

COCOMO II RAD Extension (CORADMO)

Diagram: COCOMO II takes the COCOMO II cost drivers (except SCED) plus language level and experience, and produces a baseline effort and schedule. The COPSEMO phase-distribution model spreads these into effort and schedule by stage. The RAD extension then applies the RAD drivers RVHL, DPRS, CLAB, RESL, PPOS, and RCAP to produce RAD effort and schedule by phase.

Page 35: Models for Competing on Schedule, Cost, and Quality

Effect of RCAP on Cost, Schedule

Figure: schedule M (months, 0-16) plotted against effort PM (person-months, 0-50) for RCAP (RAD capability) ratings from XL (Extra Low) to XH (Extra High), compared against the reference curves M = 3.7 × (cube root of PM), M = 3 × (cube root of PM), and M = square root of PM.
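
For reference, the three schedule-vs-effort curves named in the chart are easy to tabulate; this is just the arithmetic of those reference curves, not the CORADMO model itself.

```python
# Reference schedule-vs-effort curves from the chart:
#   M = 3.7 * PM^(1/3),  M = 3 * PM^(1/3),  M = PM^(1/2)
for pm in (10, 20, 30, 40, 50):
    print(f"PM={pm:3d}  "
          f"3.7*cbrt: {3.7 * pm ** (1/3):5.1f} mo  "
          f"3*cbrt: {3 * pm ** (1/3):5.1f} mo  "
          f"sqrt: {pm ** 0.5:5.1f} mo")
```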

Page 36: Models for Competing on Schedule, Cost, and Quality

10/24/01 ©USC-CSE 36

University of Southern CaliforniaCenter for Software Engineering C S E

USC

COCOMO II

Current COQUALMO System

COQUALMO

DefectIntroduction

Model

DefectRemoval

Model

Software platform, Project, product and personnel attributes

Software Size Estimate

Defect removal profile levelsAutomation, Reviews, Testing

Software development effort, cost and schedule estimate

Number of residual defectsDefect density per unit of size
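
The structure (not the calibrated values) can be sketched as follows: defects introduced into each artifact type are reduced by each removal activity's removal fraction. All introduction rates and removal fractions below are illustrative assumptions, not COQUALMO's calibrated numbers.

```python
# Structural sketch of COQUALMO: residual defects = introduced defects
# reduced by each defect-removal activity. All numbers are illustrative.

ksloc = 30
introduced_per_ksloc = {"requirements": 10, "design": 20, "code": 30}

# Removal fraction achieved by each profile (automated analysis,
# peer reviews, execution testing and tools) -- illustrative values.
removal_fraction = {"automated_analysis": 0.30,
                    "peer_reviews": 0.55,
                    "execution_testing": 0.60}

residual = 0.0
for artifact, rate in introduced_per_ksloc.items():
    defects = rate * ksloc
    for frac in removal_fraction.values():
        defects *= (1 - frac)       # each activity removes its fraction
    residual += defects

print(f"Residual defects: {residual:.0f} "
      f"({residual / ksloc:.2f} per KSLOC)")
```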

Page 37: Models for Competing on Schedule, Cost, and Quality

Defect Removal Rating Scales (COCOMO II, p. 263)

Automated Analysis
• Very Low: Simple compiler syntax checking
• Low: Basic compiler capabilities
• Nominal: Compiler extension; basic req. and design consistency
• High: Intermediate-level module; simple req./design
• Very High: More elaborate req./design; basic dist-processing
• Extra High: Formalized specification, verification; advanced dist-processing

Peer Reviews
• Very Low: No peer review
• Low: Ad-hoc informal walk-through
• Nominal: Well-defined preparation, review, minimal follow-up
• High: Formal review roles, well-trained people, and basic checklist
• Very High: Root cause analysis, formal follow-up, using historical data
• Extra High: Extensive review checklist; statistical control

Execution Testing and Tools
• Very Low: No testing
• Low: Ad-hoc test and debug
• Nominal: Basic test; test criteria based on checklist
• High: Well-defined test sequence and basic test coverage tool system
• Very High: More advanced test tools, preparation; dist-monitoring
• Extra High: Highly advanced tools, model-based test

Page 38: Models for Competing on Schedule, Cost, and Quality

Defect Removal Estimates - Nominal Defect Introduction Rates

Chart: delivered defects per KSLOC vs. composite defect removal rating, assuming nominal defect introduction rates:

Composite removal rating   Delivered defects/KSLOC
Very Low                   60
Low                        28.5
Nominal                    14.3
High                       7.5
Very High                  3.5
Extra High                 1.6

Page 39: Models for Competing on Schedule, Cost, and Quality

COQUALMO-ODC Model Objectives

• Support cost-schedule-quality tradeoff analysis
• Provide a reference for defect monitoring and control
• Evolve to cover all major classes of project
  - With different defect distributions (e.g., COTS-based)
  - Start simple; grow opportunistically

Page 40: Models for Competing on Schedule, Cost, and Quality

Example of Desired Model - I

Figure: effort (30-90 PM) and cost at $20K/PM ($600K-$1800K) plotted against time (months) and dependability. The current dependability scale is the RELY rating (VL through VH); the desired scale is defect MTBF in hours (1, 24, 720, 17,000, 10^6), with a companion per-KSLOC scale of 100, 10, 1, 0.1, and 0.01. Details for any given point are shown on the next chart.

Page 41: Models for Competing on Schedule, Cost, and Quality

Example of Desired Model - II
30 KSLOC; RELY = VH; 75 PM; 12 Mo.; delivered defects = 0.3

Phase                    Rqts.       Design        Code & Unit Test   Integration & Test
Effort (PM)              -           12.5          43.5               19
Cost ($K)                -           250           870                380
Schedule (Mo.)           -           3.1           5.8                3.1
Defects in/out/left      60/50/10    130/116/24    264/234/30         8/37.7/0.3
  - Rqts. defects        60/50/10    10/16/4       2/5/1              1/2/0
  - Design defects       -           120/100/20    10/25/5            2/6.9/0.1
  - Timing defects       -           12/6/6        2/4/4              1/4.9/0.1
  - Interface defects    -           30/25/5       5/9/1              0/1/0
  - ...                  ...         ...           ...                ...

Page 42: Models for Competing on Schedule, Cost, and Quality

Current COQUALMO Shortfalls

• One-size-fits-all model
  - May not fit COTS/Web, embedded applications
• Defect uniformity, independence assumptions
  - Unvalidated hypotheses
• COCOMO II C-S-Q tradeoffs go only to RELY levels
  - Not to delivered defect density, MTBF
• Need for more calibration data
  - ODC data could extend and strengthen the model

Page 43: Models for Competing on Schedule, Cost, and Quality

Outline

• Traditional and e-Services Development
  - USC research model perspectives
• Software Schedule-Cost-Quality Tradeoffs
  - Risk exposure
  - Development and ownership costs
• The COCOMO Suite of Tradeoff Models
  - COCOMO II, CORADMO, COQUALMO-ODC
• The SAIV/CAIV/SCQAIV Process Models
• Conclusions and References


Page 47: Models for Competing on Schedule, Cost, and Quality


The SAIV Process Model

1. Shared vision and expectations management

2. Feature prioritization

3. Schedule range estimation

4. Architecture and core capabilities determination

5. Incremental development

6. Change and progress monitoring and control

Page 48: Models for Competing on Schedule, Cost, and Quality

Shared Vision, Expectations Management, and Feature Prioritization

• Use the stakeholder win-win approach
• Developer win condition: don't overrun the fixed 9-month schedule
• Clients' win conditions: 24 months' worth of features
• Win-Win negotiation
  – Which features are most critical?
  – COCOMO II: how many features can be built within a 9-month schedule?
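
One way to answer that last question: fix the schedule, invert the COCOMO II schedule equation to get the affordable effort, then invert the effort equation to get the buildable size. A minimal sketch under the COCOMO II.2000 constants, with illustrative nominal scale factors and multipliers (and ignoring SCED compression); a real project's drivers would change the numbers.

```python
# Invert COCOMO II.2000 to ask: how much software fits in a fixed schedule?
#   TDEV = C * PM^F  ->  PM   = (TDEV / C)^(1/F)
#   PM = A * Size^E * EM  ->  Size = (PM / (A * EM))^(1/E)
A, B, C, D = 2.94, 0.91, 3.67, 0.28
sf_sum, em = 18.97, 1.0        # illustrative: Nominal scale factors and multipliers

E = B + 0.01 * sf_sum
F = D + 0.2 * (E - B)

tdev = 9.0                          # fixed 9-month schedule
pm = (tdev / C) ** (1 / F)          # affordable effort, person-months
ksloc = (pm / (A * em)) ** (1 / E)  # buildable size, KSLOC

print(f"{tdev} months -> about {pm:.0f} PM -> about {ksloc:.0f} KSLOC of top-priority features")
```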

Page 49: Models for Competing on Schedule, Cost, and Quality

COCOMO II Estimate Ranges

Figure: the cone of uncertainty for relative cost range across phases (Feasibility, Plans and Rqts., Product Design, Detail Design, Devel. and Test) and their milestones (Concept of Operation, Rqts. Spec., Product Design Spec., Detail Design Spec., Accepted Software). The 90% confidence limits (pessimistic and optimistic) narrow from 4x/0.25x at Feasibility to 2x/0.5x, 1.5x/0.67x, and 1.25x/0.8x as the product is defined, with the Applications Composition (3 parameters), Early Design (13 parameters), and Post-Architecture (23 parameters) models applying at successive stages.
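
A small sketch of using those ranges: multiply a point estimate by the optimistic/pessimistic factors for the current milestone. The factors are taken from the figure; the point estimate is illustrative.

```python
# 90%-confidence cost-range factors by milestone, from the cone of uncertainty.
range_factors = {
    "Feasibility":          (0.25, 4.0),
    "Concept of Operation": (0.5, 2.0),
    "Rqts. Spec.":          (0.67, 1.5),
    "Product Design Spec.": (0.8, 1.25),
}

point_estimate_pm = 100   # illustrative point estimate, person-months

for milestone, (opt, pess) in range_factors.items():
    print(f"{milestone:22s} {point_estimate_pm * opt:5.0f} - "
          f"{point_estimate_pm * pess:.0f} PM")
```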

Page 50: Models for Competing on Schedule, Cost, and Quality

Software Product Production Function

Figure: value of the software product to the organization plotted against investment. The curve has three segments: an investment segment (operating system, data management), a high-payoff segment (basic application functions, main application functions), and a diminishing-returns segment (humanized I/O, secondary application functions, animated graphics, tertiary application functions, natural speech input). Delivery-availability annotations show which capabilities can be delivered by time T with 90% vs. 50% confidence, for T = 6 months and T = 12 months.

Page 51: Models for Competing on Schedule, Cost, and Quality

Core Capability, Incremental Development, and Coping with Rapid Change

• Core capability is not just the top-priority features
  – Useful end-to-end capability
  – Architected for ease of adding and dropping marginal features
• Worst case: deliver the core capability in 9 months, with some extra effort
• Most likely case: finish the core capability in 6-7 months
  – Add next-priority features
• Cope with change by monitoring progress
  – Renegotiate plans as appropriate

Page 52: Models for Competing on Schedule, Cost, and Quality

SAIV Experience I: USC Digital Library Projects

• Life Cycle Architecture package in a fixed 12 weeks
  – Compatible operational concept, prototypes, requirements, architecture, plans, feasibility rationale
• Initial Operational Capability in 12 weeks
  – Including a 2-week cold-turkey transition
• Successful on 24 of 26 projects
  – Failure 1: too-ambitious core capability (covering 3 image repositories at once)
  – Failure 2: team disbanded (graduation, summer job pressures)
• 92% success rate vs. the 16% industry rate in the Standish Report

Page 53: Models for Competing on Schedule, Cost, and Quality

Rapid Value™ Project Approach

Diagram: C-bridge's RapidValue process runs through four phases - Define, Design, Develop, Deploy - separated by "lines of readiness" ("Are we ready for the next step?") and driven by iteration with a scope, listening, and delivery focus.
• Define: identify system actors; document business processes; generate use cases; define basic development strategies
• Design: object domain modeling; detailed object design and logical data model; object interactions and system services; polish design, build plan
• Develop: Build 1; Build 2; stabilization build; release to test
• Deploy: beta program; pilot program; production
• 16-24 week fixed schedule

LA SPIN, Copyright © 2001 C-bridge

Page 54: Models for Competing on Schedule, Cost, and Quality

Conclusions: SAIV Critical Success Factors

• Working with stakeholders in advance to achieve a shared product vision and realistic expectations
• Getting clients to develop and maintain prioritized requirements
• Scoping the core capability to fit within the high-payoff segment of the application's production function for the given schedule
• Architecting the system for ease of adding and dropping features
• Disciplined progress monitoring and corrective action to counter schedule threats
• Also works for Cost as Independent Variable
  – And "Cost, Schedule, Quality: Pick All Three"

Page 55: Models for Competing on Schedule, Cost, and Quality

Conclusions

• Future IT systems require new model perspectives
  - Product, process, property, success models
• USC COCOMO and MBASE model families are helpful
  - Tradeoff and decision analysis
  - Integrated product and process development
  - Rapid Application Development
  - Process management and improvement
• USC-IBM COQUALMO-ODC model a valuable next step

Page 56: Models for Competing on Schedule, Cost, and Quality

Further Information

V. Basili, G. Caldiera, and H. Rombach, "The Experience Factory" and "The Goal Question Metric Approach," in J. Marciniak (ed.), Encyclopedia of Software Engineering, Wiley, 1994.

B. Boehm, C. Abts, A.W. Brown, S. Chulani, B. Clark, E. Horowitz, R. Madachy, D. Reifer, and B. Steece, Software Cost Estimation with COCOMO II, Prentice Hall, 2000.

B. Boehm and D. Port, "Escaping the Software Tar Pit: Model Clashes and How to Avoid Them," ACM Software Engineering Notes, January 1999.

B. Boehm et al., "Using the WinWin Spiral Model: A Case Study," IEEE Computer, July 1998, pp. 33-44.

R. van Solingen and E. Berghout, The Goal/Question/Metric Method, McGraw-Hill, 1999.

COCOMO II and MBASE items: http://sunset.usc.edu
CeBASE items: http://www.cebase.org