Balancing System and Software Qualities
Barry Boehm, USC
STC 2015 Tutorial
October 12, 2015
7-14-2015 1
-
Outline
• Critical nature of system qualities (SQs)
– Or non-functional requirements (NFRs); ilities
– Major source of project overruns, failures
– Significant source of stakeholder value conflicts
– Poorly defined, understood
– Underemphasized in project management
• Foundations: An initial SQs ontology
– Nature of an ontology; choice of IDEF5 structure
– Stakeholder value-based, means-ends hierarchy
– Synergies and Conflicts matrix and expansions
– Related SERC models, methods, processes, tools (MMPTs)
• Evidence-based processes for balancing SQs
-
Importance of SQ Tradeoffs
Major source of DoD, other system overruns
• SQs have systemwide impact
– System elements generally just have local impact
• SQs often exhibit asymptotic behavior
– Watch out for the knee of the curve
• Best architecture is a discontinuous function of SQ level
– “Build it quickly, tune or fix it later” highly risky
– Large system example below
[Figure: development cost ($50M-$100M scale) vs. response time (1-5 sec). The original architecture (modified client-server) meets the original cost up to the knee of the curve; the originally specified 1-second response time required a custom architecture with many cache processors at much higher cost. After prototyping, the spec was relaxed to where the original architecture sufficed.]
-
Types of Decision Reviews
• Schedule-based commitment reviews (plan-driven)
– We’ll release the RFP on April 1 based on the schedule in the plan
– $70M overrun to produce overperforming system
• Event-based commitment reviews (artifact-driven)
– The design will be done in 15 months, so we’ll have the review then
– Responsive design found to be unaffordable 15 months later
• Evidence-based commitment reviews (risk-driven)
– Evidence: affordable COTS-based system can’t satisfy 1-second
requirement
• Custom solution roughly 3x more expensive
– Need to reconsider 1-second requirement
07/09/2010 ©USC-CSSE 4
-
Evidence-Based Decision Result
• Attempt to validate 1-second response time
• Commercial system benchmarking and architecture analysis: needs expensive custom solution
• Prototype: 4-second response time OK 90% of the time
• Negotiate response time ranges
– 2 seconds desirable
– 4 seconds acceptable, with some 2-second special cases
• Benchmark commercial system add-ons to validate their feasibility
• Present solution and feasibility evidence at evidence-based decision review
• Result: Acceptable solution with minimal delay
-
Outline
• Critical nature of system qualities (SQs)
– Or non-functional requirements (NFRs); ilities
– Major source of project overruns, failures
– Significant source of stakeholder value conflicts
– Poorly defined, understood
– Underemphasized in project management
• Foundations: An initial SQs ontology
– Nature of an ontology; choice of IDEF5 structure
– Stakeholder value-based, means-ends hierarchy
– Synergies and Conflicts matrix and expansions
– Related SERC models, methods, processes, tools (MMPTs)
• Evidence-based processes for balancing SQs
-
Example of SQ Value Conflicts: Security IPT
• Single-agent key distribution; single data copy
– Reliability: single points of failure
• Elaborate multilayer defense
– Performance: 50% overhead; real-time deadline problems
• Elaborate authentication
– Usability: delays, delegation problems; GUI complexity
• Everything at highest level
– Modifiability: overly complex changes, recertification
-
Proliferation of Definitions: Resilience
• Wikipedia Resilience variants: Climate, Ecology, Energy Development,
Engineering and Construction, Network, Organizational, Psychological, Soil
• Ecology and Society Organization Resilience variants: Original-ecological,
Extended-ecological, Walker et al. list, Folke et al. list; Systemic-heuristic,
Operational, Sociological, Ecological-economic, Social-ecological system,
Metaphoric, Sustainability-related
• Variants in resilience outcomes
– Returning to original state; Restoring or improving original state;
Maintaining same relationships among state variables; Maintaining
desired services; Maintaining an acceptable level of service; Retaining
essentially the same function, structure, and feedbacks; Absorbing
disturbances; Coping with disturbances; Self-organizing; Learning and
adaptation; Creating lasting value
– Source of serious cross-discipline collaboration problems
-
Example of Current Practice
• “The system shall have a Mean Time Between Failures of 10,000 hours”
• What is a “failure”?
– 10,000 hours on liveness
– But several dropped or garbled messages per hour?
• What is the operational context?
– Base operations? Field operations? Conflict operations?
• Most management practices focused on functions
– Requirements, design reviews; traceability matrices; work breakdown structures; data item descriptions; earned value management
• What are the effects of or on other SQs?
– Cost, schedule, performance, maintainability?
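To make the ambiguity concrete: under the standard constant-failure-rate model (an illustration, not from the slides), the same system can look highly reliable on liveness while failing constantly on message integrity, depending on which failure definition the MTBF refers to:

```python
import math

def reliability(mtbf_hours: float, mission_hours: float) -> float:
    """P(no failure during the mission), constant-failure-rate model."""
    return math.exp(-mission_hours / mtbf_hours)

# A 10,000-hour MTBF on liveness gives high mission reliability...
print(round(reliability(10_000, 100), 4))   # ~0.99 for a 100-hour mission

# ...but says nothing about dropped/garbled messages: at 2 garbles per hour,
# the MTBF with respect to *message* failures is only 0.5 hours.
print(round(reliability(0.5, 1), 4))        # ~0.14 chance of a garble-free hour
```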
-
Outline
• Critical nature of system qualities (SQs)
– Or non-functional requirements (NFRs); ilities
– Major source of project overruns, failures
– Significant source of stakeholder value conflicts
– Poorly defined, understood
– Underemphasized in project management
• Foundations: An initial SQs ontology
– Nature of an ontology; choice of IDEF5 structure
– Stakeholder value-based, means-ends hierarchy
– Synergies and Conflicts matrix and expansions
– Related SERC models, methods, processes, tools (MMPTs)
• Evidence-based processes for balancing SQs
-
Need for SQs Ontology
• Oversimplified one-size-fits-all definitions
– ISO/IEC 25010, Reliability: the degree to which a system, product, or component performs specified functions under specified conditions for a specified period of time
– OK if specifications are precise, but increasingly “specified conditions” are informal, sunny-day user stories
• Satisfying just these will pass “ISO/IEC Reliability,” even if the system fails on rainy-day user stories
– Need to reflect that different stakeholders rely on different capabilities (functions, performance, flexibility, etc.) at different times and in different environments
• Proliferation of definitions, as with Resilience
• Weak understanding of inter-SQ relationships
– Security Synergies and Conflicts with other qualities
-
Nature of an ontology; choice of IDEF5 structure
• An ontology for a collection of elements is a definition of
what it means to be a member of the collection
• For “system qualities,” this means that an SQ identifies an
aspect of “how well” the system performs
– The ontology also identifies the sources of variability in the value of “how well” the system performs
• Functional requirements specify “what;” NFRs specify “how well”
• After investigating several ontology frameworks, the IDEF5
framework appeared to best address the nature and sources
of variability of system SQs
– Good fit so far
-
Initial SERC SQs Ontology
• Modified version of IDEF5 ontology framework
– Classes, Subclasses, and Individuals
– Referents, States, Processes, and Relations
• Top classes cover stakeholder value propositions
– Mission Effectiveness, Resource Utilization, Dependability, Changeability
• Subclasses identify means for achieving higher-class ends
– Means-ends one-to-many for top classes
– Ideally mutually exclusive and exhaustive, but some exceptions
– Many-to-many for lower-level subclasses
• Referents, States, Processes, Relations cover SQ variation
• Referents: Sources of variation by stakeholder value context:
• States: Internal (beta-test); External (rural, temperate, sunny)
• Processes: Operational scenarios (normal vs. crisis; experts vs. novices)
• Relations: Impact of other SQs (security as above, synergies & conflicts)
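One illustrative way to encode an ontology class with its IDEF5-style sources of variation (a hypothetical sketch; the names and fields are not SERC's actual schema):

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class SQClass:
    """One SQ ontology class with IDEF5-style sources of variation."""
    name: str
    means: list[SQClass] = field(default_factory=list)       # means-ends subclasses
    referents: list[str] = field(default_factory=list)       # stakeholder value contexts
    states: list[str] = field(default_factory=list)          # internal/external states
    processes: list[str] = field(default_factory=list)       # operational scenarios
    relations: dict[str, str] = field(default_factory=dict)  # other SQ -> synergy/conflict

reliability = SQClass(
    name="Reliability",
    referents=["desired/acceptable response time", "accuracy"],
    states=["beta test", "rural, temperate, sunny"],
    processes=["normal vs. crisis operations", "experts vs. novices"],
    relations={"Security": "conflict: multilayer defense adds overhead"},
)
dependability = SQClass(name="Dependability", means=[reliability])
print(dependability.means[0].name)
```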
-
Example: Reliability Revisited
• Reliability is the probability that the system will deliver
stakeholder-satisfactory results for a given time period
(generally an hour), given specified ranges of:
– Stakeholder value propositions: desired and acceptable ranges of liveness, accuracy, response time, speed,
capabilities, etc.
– System internal and external states: integration test, acceptance test, field test, etc.; weather, terrain, DEFCON,
takeoff/flight/landing, etc.
– System internal and external processes: status checking frequency; types of missions supported; workload volume,
interoperability with independently evolving external systems
– Effects via relations with other SQs: synergies improving other SQs; conflicts degrading other SQs
-
Set-Based SQs Definition Convergence
RPV Surveillance Example
[Figure: desired and acceptable regions of Effectiveness (pixels/frame) vs. Efficiency (e.g., frames/second), converging over three phases.]
Phase 1. Rough ConOps, Rqts, Solution Understanding
Phase 2. Improved ConOps, Rqts, Solution Understanding
Phase 3. Good ConOps, Rqts, Solution Understanding
-
Stakeholder value-based, means-ends hierarchy
• Mission operators and managers want improved Mission Effectiveness
– Involves Physical Capability, Cyber Capability, Human Usability, Speed, Accuracy,
Impact, Maneuverability, Scalability, Versatility, Interoperability
• Mission investors and system owners want Mission Cost-Effectiveness
– Involves Cost, Duration, Personnel, Scarce Quantities (capacity, weight, energy, …);
Manufacturability, Sustainability
• All want system Dependability: cost-effective defect-freedom, availability, and
safety and security for the communities that they serve
– Involves Reliability, Availability, Maintainability, Survivability, Safety, Security,
Robustness
• In an increasingly dynamic world, all want system Changeability: to be rapidly
and cost-effectively evolvable
– Involves Maintainability, Adaptability
-
U. Virginia: Coq Formal Reasoning Structure

  Inductive Dependable (s : System) : Prop :=
    mk_dependability : Security s -> Safety s -> Reliability s ->
                       Maintainability s -> Availability s -> Survivability s ->
                       Robustness s -> Dependable s.

  Example aSystemisDependable : Dependable aSystem.
  Proof.
    apply mk_dependability.
    exact (is_secure aSystem).
    exact (is_safe aSystem).
    exact (is_reliable aSystem).
    exact (is_maintainable aSystem).
    exact (is_available aSystem).
    exact (is_survivable aSystem).
    exact (is_robust aSystem).
  Qed.
-
Maintainability Challenges and Responses
• Maintainability supports both Dependability and Changeability
– Distinguish between Repairability and Modifiability
– Elaborate each via means-ends relationships
• Multiple definitions of Changeability
– Distinguish between Product Quality and Quality in Use (ISO/IEC 25010)
– Provide mapping between Product Quality and Quality in Use viewpoints
• Changeability can be both external and internal
– Distinguish between Maintainability and Adaptability
• Many definitions of Resilience
– Define Resilience as a combination of Dependability and Changeability
• Variability of Dependability and Changeability values
– Ontology addresses sources of variation
– Referents (stakeholder values), States, Processes, Relations with other SQs
• Need to help stakeholders choose options, avoid pitfalls
– Qualipedia, Synergies and Conflicts, Opportunity Trees
-
Product Quality View of Changeability
MIT Quality in Use View also valuable
• Changeability (Product Quality): Ability to become a different product
  – Swiss Army Knife, Brick: useful but not Changeable
• Changeability (Quality in Use): Ability to accommodate changes in use
  – Swiss Army Knife does not change as a product but is Changeable in use
  – Brick is Changeable in Use (can help build a wall or break a window)
[Diagram: Changeability hierarchy (legend: Means to End; Subclass of)]
Changeability
  – Adaptability (internal): Self-Diagnosability, Self-Modifiability, Self-Testability
  – Maintainability (external)
     – Modifiability (changes): Understandability, Modularity, Scalability, Portability
     – Repairability (defects): Diagnosability, Accessibility, Restorability
     – Testability
-
Dependability, Changeability, and Resilience
[Diagram: means-ends and subclass relationships (legend: Means to End; Subclass of) linking Dependability and Changeability to Resilience. Nodes include Reliability, Maintainability (Repairability, Modifiability), Availability, Defect Freedom, Survivability (Fault Tolerance, Robustness, Self-Repairability, Graceful Degradation), Testability (complete or partial: test plans, coverage; test scenarios, data; test drivers, oracles; test software qualities), Diagnosability, and choices of Security, Safety, etc.]
-
Referents: MIT 14-D Semantic Basis
-
Outline
• Critical nature of system qualities (SQs)
– Or non-functional requirements; ilities
– Major source of project overruns, failures
– Significant source of stakeholder value conflicts
– Poorly defined, understood
– Underemphasized in project management
• Need for and nature of SQs ontology
– Nature of an ontology; choice of IDEF5 structure
– Stakeholder value-based, means-ends hierarchy
– Synergies and Conflicts matrix and expansions
• Example means-ends hierarchy: Affordability
-
7x7 Synergies and Conflicts Matrix
• Mission Effectiveness expanded to 4 elements
– Physical Capability, Cyber Capability, Interoperability, Other Mission Effectiveness (including Usability as Human Capability)
• Synergies and Conflicts among the 7 resulting elements
identified in 7x7 matrix
– Synergies above main diagonal, Conflicts below
• Work-in-progress tool will enable clicking on an entry and
obtaining details about the synergy or conflict
– Ideally quantitative; some examples next
• Still need synergies and conflicts within elements
– Example 3x3 Dependability subset provided
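A hypothetical sketch of the kind of lookup such a tool might provide, assuming the seven elements are the four Mission Effectiveness elements plus Resource Utilization, Dependability, and Changeability (the element order and example cell entries are illustrative, not the actual matrix contents):

```python
# Hypothetical sketch of the 7x7 matrix as a lookup table; the example
# entries are illustrative, NOT the actual SERC matrix contents.
ELEMENTS = ["Physical Capability", "Cyber Capability", "Interoperability",
            "Other Mission Effectiveness", "Resource Utilization",
            "Dependability", "Changeability"]

CELLS = {  # (row, col) -> detail text
    ("Dependability", "Changeability"): "Synergy: modularity aids both repair and evolution",
    ("Changeability", "Dependability"): "Conflict: frequent change can destabilize assurance",
}

def lookup(row: str, col: str) -> str:
    """Above the main diagonal: synergies; below it: conflicts."""
    kind = "synergy" if ELEMENTS.index(row) < ELEMENTS.index(col) else "conflict"
    return CELLS.get((row, col), f"(no {kind} detail recorded)")

print(lookup("Dependability", "Changeability"))
```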
-
[7x7 Synergies and Conflicts matrix, with synergies above and conflicts below the main diagonal.]
-
Software Development Cost vs. Reliability

COCOMO II RELY rating:    Very Low | Low  | Nominal | High   | Very High
MTBF (hours):             1        | 10   | 300     | 10,000 | 300,000
Relative cost to develop: 0.82     | 0.92 | 1.0     | 1.10   | 1.26
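The chart's data as a simple lookup (multipliers and MTBF values read from the chart above):

```python
# COCOMO II RELY ratings: (MTBF in hours, relative development cost)
RELY = {
    "Very Low":  (1,       0.82),
    "Low":       (10,      0.92),
    "Nominal":   (300,     1.00),
    "High":      (10_000,  1.10),
    "Very High": (300_000, 1.26),
}

def relative_cost(rating: str) -> float:
    return RELY[rating][1]

# Going from Very Low to Very High reliability costs about 54% more to develop:
print(round(relative_cost("Very High") / relative_cost("Very Low"), 2))  # 1.54
```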
-
Software Ownership Cost vs. Reliability
[Chart: relative cost to develop, maintain, own, and operate vs. COCOMO II RELY rating (Very Low to Very High; MTBF 1, 10, 300, 10,000, 300,000 hours), assuming 70% of cost in maintenance. Development-cost multipliers are 0.82, 0.92, 1.0, 1.10, 1.26. With operational-defect cost = 0, ownership costs stay near nominal (plotted values include 0.99-1.23). When operational-defect cost at Nominal dependability equals the software life cycle cost, low ratings become far costlier (VL = 2.55, L = 1.52) and higher ratings cheaper (plotted values include 0.76 and 0.69).]
-
Cost Improvements and Tradeoffs (Affordability and Tradespace Framework)
[Opportunity tree diagram. Main branches: Get the Best from People; Make Tasks More Efficient; Eliminate Tasks; Eliminate Scrap, Rework; Simplify Products (KISS); Reuse Components; Reduce Operations, Support Costs; Anticipate, Prepare for Change. Strategies on the tree include: Staffing, Incentivizing, Teambuilding; Kaizen (continuous improvement); Collaboration Technology; Facilities, Support Services; Work and Oversight Streamlining; Tools and Automation; Lean and Agile Methods; Task Automation; Model-Based Product Generation; Early Risk and Defect Elimination; Evidence-Based Decision Gates; Incremental, Evolutionary Development; Risk-Based Prototyping; Value-Based, Agile Process Maturity; Satisficing vs. Optimizing Performance; Value-Based Capability Prioritization; Composable Components, Services, COTS; Domain Engineering and Architecture; Legacy System Repurposing; Streamline Supply Chain; Design for Maintainability, Evolvability; Automate Operations Elements; Modularity Around Sources of Change; Value- and Architecture-Based Tradeoffs and Balancing.]
-
Costing Insights: COCOMO II Productivity Ranges
[Bar chart: productivity range (roughly 1.2-2.4, the ratio between each driver's extreme multipliers) for the COCOMO II cost drivers, in decreasing order: Product Complexity (CPLX); Analyst Capability (ACAP); Programmer Capability (PCAP); Time Constraint (TIME); Personnel Continuity (PCON); Required Software Reliability (RELY); Documentation Match to Life Cycle Needs (DOCU); Multi-Site Development (SITE); Applications Experience (AEXP); Platform Volatility (PVOL); Use of Software Tools (TOOL); Storage Constraint (STOR); Process Maturity (PMAT); Language and Tools Experience (LTEX); Required Development Schedule (SCED); Data Base Size (DATA); Platform Experience (PEXP); Architecture and Risk Resolution (RESL); Precedentedness (PREC); Develop for Reuse (RUSE); Team Cohesion (TEAM); Development Flexibility (FLEX). Scale factor ranges computed for 10, 100, 1000 KSLOC. Annotated clusters: Staffing, Teambuilding, Continuous Improvement.]
-
COSYSMO Sys Engr Cost Drivers
[Bar chart: COSYSMO systems engineering cost-driver productivity ranges, with annotated clusters: Staffing, Teambuilding, Continuous Improvement.]
-
Affordability Improvements and Tradeoffs (Tradespace and Affordability Framework)
[Opportunity tree diagram. Main branches: Get the Best from People; Make Tasks More Efficient; Eliminate Tasks; Eliminate Scrap, Rework; Simplify Products (KISS); Reuse Components; Reduce Operations, Support Costs; Anticipate, Prepare for Change. Strategies on the tree include: Staffing, Incentivizing, Teambuilding; Kaizen (continuous improvement); Collaboration Technology; Facilities, Support Services; Work and Oversight Streamlining; Tools and Automation; Lean and Agile Methods; Task Automation; Model-Based Product Generation; Early Risk and Defect Elimination; Evidence-Based Decision Gates; Incremental, Evolutionary Development; Risk-Based Prototyping; Value-Based, Agile Process Maturity; Satisficing vs. Optimizing Performance; Value-Based Capability Prioritization; Composable Components, Services, COTS; Domain Engineering and Architecture; Legacy System Repurposing; Streamline Supply Chain; Design for Maintainability, Evolvability; Automate Operations Elements; Modularity Around Sources of Change; Value- and Architecture-Based Tradeoffs and Balancing.]
-
COCOMO II-Based Tradeoff Analysis
Better, Cheaper, Faster: Pick Any Two?
[Chart: cost ($0-9M) vs. development time (0-50 months), with cost/schedule/RELY “pick any two” points labeled (RELY rating, MTBF in hours): (VL, 1), (L, 10), (N, 300), (H, 10K), (VH, 300K).]
• For 100-KSLOC set of features
• Can “pick all three” with 77-KSLOC set of features
-
Value-Based Testing: Empirical Data and ROI (LiGuo Huang, ISESE 2005)
[Charts: percent of value for correct customer billing vs. customer type, where an automated test generation (ATG) tool treats all tests as having equal value, while Bullock's data show a Pareto distribution with a few customer types carrying most of the value; and return on investment (ROI, -1.5 to 2) vs. percent of tests run, where Value-Based Pareto Testing yields far higher ROI than Value-Neutral ATG Testing.]
-
Value-Neutral Defect Fixing Is Even Worse
[Chart: percent of value for correct customer billing vs. customer type, comparing value-neutral defect fixing (quickly reduce the number of defects, all tests treated as equal value) against the Pareto 80-20 business-value distribution.]
-
Outline
• Foundations: An initial SQs ontology
– Nature of an ontology; choice of IDEF5 structure
– Stakeholder value-based, means-ends hierarchy
• Critical nature of system qualities (SQs)
– Or non-functional requirements (NFRs); ilities
– Major source of project overruns, failures
– Significant source of stakeholder value conflicts
– Poorly defined, understood
– Underemphasized in project management
– Synergies and Conflicts matrix and expansions
– Related SERC models, methods, processes, tools (MMPTs)
• Evidence-based processes for balancing SQs
-
Context: SERC SQ Initiative Elements
• SQ Tradespace and Affordability Project foundations
– More precise SQ definitions and relationships
– Stakeholder value-based, means-ends relationships
– SQ strategy effects, synergies, conflicts, Opportunity Trees
– USC, MIT, U. Virginia
• Next-generation system cost-schedule estimation models
– Software: COCOMO III
– Systems Engineering: COSYSMO 3.0
– USC, AFIT, GaTech, NPS
• Applied SQ methods, processes, and tools (MPTs)
– For concurrent cyber-physical-human systems
– Experimental MPT piloting, evolution, improvement
– Wayne State, AFIT, GaTech, MIT, NPS, Penn State, USC
-
SQ Opportunity Trees
• Means-Ends relationships for major SQs
– Previously developed for cost and schedule reduction
– Revising an early version for Dependability
– Starting on Maintainability
• Accommodates SQ Synergies and Conflicts matrix
– Conflicts via “Address Potential Conflicts” entry
– Means to Ends constitute Synergies
• More promising organizational structure than matrix
-
Schedule Reduction Opportunity Tree
• Eliminating Tasks: Business process reengineering; Reusing assets; Applications generation; Schedule as independent variable
• Reducing Time Per Task: Tools and automation; Work streamlining (80-20)
• Reducing Risks of Single-Point Failures: Reducing failures; Reducing their effects
• Reducing Backtracking: Early error elimination; Process anchor points; Improving process maturity; Collaboration technology
• Activity Network Streamlining: Increasing parallelism; Minimizing task dependencies; Avoiding high fan-in, fan-out; Reducing task variance; Removing tasks from critical path
• Increasing Effective Workweek: 24x7 development; Nightly builds, testing; Weekend warriors
• Better People and Incentives
• Transition to Learning Organization
-
Reuse at HP's Queensferry Telecommunication Division
[Chart: time to market (0-70 months) by year, 1986-1992, comparing non-reuse projects with reuse projects; reuse projects show substantially shorter times to market.]
-
Software Dependability Opportunity Tree
Decrease Defect Risk Exposure:
• Decrease Defect Prob (Loss): Defect Prevention; Defect Detection and Removal
• Decrease Defect Impact, Size (Loss): Value/Risk-Based Defect Reduction; Graceful Degradation
Continuous Improvement: CI Methods and Metrics (Process, Product, People, Technology)
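The top of the tree reflects the standard risk-exposure relation RE = Prob(Loss) x Size(Loss): the two middle branches attack the two factors. A minimal sketch with hypothetical numbers:

```python
# Defect risk exposure: RE = P(loss) * S(loss). Lowering either factor
# (prevention/detection vs. impact reduction) lowers the exposure.
def risk_exposure(p_loss: float, size_of_loss: float) -> float:
    return p_loss * size_of_loss

baseline = risk_exposure(0.10, 1_000_000)          # illustrative numbers
after_detection = risk_exposure(0.02, 1_000_000)   # prevention/detection cuts P(loss)
after_degradation = risk_exposure(0.02, 200_000)   # graceful degradation cuts S(loss)
print(round(baseline), round(after_detection), round(after_degradation))  # 100000 20000 4000
```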
-
Software Defect Prevention Opportunity Tree
Defect Prevention:
• People practices: IPT, JAD, WinWin, …; PSP, Cleanroom, Pair development, …; Staffing for dependability
• Standards: Rqts., Design, Code, …; Interfaces, traceability, …; Checklists
• Languages
• Prototyping: Manual execution, scenarios, …
• Modeling & Simulation
• Reuse
• Root cause analysis
-
Defect Detection Opportunity Tree
Defect Detection and Repair (Rqts., Design, Development):
• Automated Analysis: Completeness checking; Consistency checking (views, interfaces, behavior, pre/post conditions); Traceability checking; Compliance checking (models, assertions, standards)
• Reviewing: Peer reviews, inspections; Architecture Review Boards; Pair development
• Testing: Requirements & design; Structural; Operational profile; Usage (alpha, beta); Regression; Value/Risk-based; Test automation
-
Defect Impact Reduction Opportunity Tree
Decrease Defect Impact, Size (Loss):
• Value/Risk-Based Defect Reduction: Business case analysis; Pareto (80-20) analysis; V/R-based reviews; V/R-based testing; Cost/schedule/quality as independent variable
• Graceful Degradation: Fault tolerance; Self-stabilizing SW; Reduced-capability modes; Manual backup; Rapid recovery
-
Repairability Opportunity Tree
• Anticipate Repairability Needs: Repair facility requirements; Repair-type trend analysis; Repair skills analysis; Repair personnel involvement; Replacement parts trend analysis; Address Potential Conflicts
• Design/Develop for Repairability: Replacement parts logistics development; Replacement accessibility design; Switchable spare components; Repair facility design, development; Address Potential Conflicts
• Improve Repair Diagnosis: Multimedia diagnosis guides; Fault localization; Smart system anomaly, trend analysis; Informative error messages; Address Potential Conflicts
• Improve Repair, V&V Processes: Repair compatibility analysis; Regression test capabilities; Prioritize, Schedule Repairs, V&V; Address Potential Conflicts
-
Modifiability Opportunity Tree
• Anticipate Modifiability Needs: Evolution requirements; Trend analysis; Hotspot (change source) analysis; Modifier involvement; Address Potential Conflicts
• Design/Develop for Modifiability: Spare capacity; Domain-specific architecture (in domain); Modularize around hotspots; Service-orientation, loose coupling; Enable user programmability; Address Potential Conflicts
• Improve Modification, V&V: Regression test capabilities; Modification compatibility analysis; Prioritize, Schedule Modifications, V&V; Address Potential Conflicts
-
Hotspot Analysis: Projects A and B
Change processing over 1 person-month = 152 person-hours

Category: change efforts (person-hours)
• Extra long messages: 3404+626+443+328+244 = 5045
• Network failover: 2050+470+360+160 = 3040
• Hardware-software interface: Project A: 620+200 = 820; Project B: 1629+513+289+232+166 = 2832
• Encryption algorithms: 1247+368 = 1615
• Subcontractor interface: 1100+760+200 = 2060
• GUI revision: 980+730+420+240+180 = 2550
• Data compression algorithm: 910
• External applications interface: 770+330+200+160 = 1460
• COTS upgrades: Project A: 540+380+190 = 1110; Project B: 741+302+221+197 = 1461
• Database restructure: 690+480+310+210+170 = 1860
• Routing algorithms: 494+198 = 692
• Diagnostic aids: Project A: 360; Project B: 477+318+184 = 979
• TOTAL: Project A: 13620; Project B: 13531
-
Outline
• Foundations: An initial SQs ontology
– Nature of an ontology; choice of IDEF5 structure
– Stakeholder value-based, means-ends hierarchy
• Critical nature of system qualities (SQs)
– Or non-functional requirements (NFRs); ilities
– Major source of project overruns, failures
– Significant source of stakeholder value conflicts
– Poorly defined, understood
– Underemphasized in project management
– Synergies and Conflicts matrix and expansions
– Related SERC models, methods, processes, tools (MMPTs)
• Evidence-based processes for balancing SQs
-
Product Line Engineering and Management: NPS
[Figure not recoverable from this transcript.]
-
Cost-Schedule Tradespace Analysis
• Generally, reducing schedule adds cost
– Pair programming: 60% schedule * 2 people = 120% cost
• Increasing schedule may or may not add cost
– Pre-planned smaller team: less communications overhead
– Mid-course stretchout: pay longer for tech, admin overhead
• Can often decrease both cost and schedule
– Lean, agile, value-based methods; product-line reuse
• Can optimize on schedule via concurrent vs. sequential processes
– Sequential; cost-optimized: Schedule = 3 * cube root (effort)
• 27 person-months: Schedule = 3*3=9 months; 3 personnel
– Concurrent, schedule-optimized: Schedule = square root (effort)
• 27 person-months: Schedule = 5.5 months; 5.4 personnel
• Can also accelerate agile square root schedule
– SERC Expediting SysE study: product, process, people, project, risk
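The two scheduling rules of thumb above can be sketched directly (formulas as given on the slide; returned staff level is simply effort divided by schedule):

```python
import math

def cost_optimized(effort_pm: float) -> tuple[float, float]:
    """Sequential, cost-optimized rule of thumb: schedule = 3 * effort**(1/3)."""
    months = 3 * effort_pm ** (1 / 3)
    return months, effort_pm / months  # (schedule in months, average staff)

def schedule_optimized(effort_pm: float) -> tuple[float, float]:
    """Concurrent, schedule-optimized rule of thumb: schedule = sqrt(effort)."""
    months = math.sqrt(effort_pm)
    return months, effort_pm / months

print(cost_optimized(27))      # 9 months with 3 people for 27 person-months
print(schedule_optimized(27))  # about 5.2 months, 5.2 people (slide quotes 5.5 and 5.4)
```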
-
MIT: SQ Tradespace Exploration
Based on SEAri research
• Enabling construct: Tradespace Networks, with a set of metrics for Changeability, Survivability, and Value Robustness
• Enabling construct: Epochs and Eras
-
GaTech – FACT Tradespace Tool
Being used by Marine Corps; Army, Navy extensions
• Configure vehicles from the “bottom up”
• Quickly assess impacts on performance
-
SysML Building Blocks for Cost Modeling
GaTech-USC Prototype
• Implemented reusable SysML building blocks
  – Based on SoS/COSYSMO SE cost (effort) modeling work by Lane, Valerdi, Boehm, et al.
• Successfully applied building blocks to healthcare SoS case study from [Lane 2009]
• Provides key step towards affordability trade studies involving diverse “-ilities”
-
Healthcare SoS Case Study [Lane 2009] Shown
Using GaTech SysML Building Blocks [Peak 2014]
-
SERC Expediting SysE study: Product, process, people, project; risk factors
• Over 30 interviews with Gov’t/Industry rapid development organizations
• Over 23,500 words from interview notes in the final database
• Product, Process, People … all in a Project Context
-
CORADMO-SE Rating Scales, Schedule Multipliers
(Ratings: Very Low (VL), Low (L), Nominal (N), High (H), Very High (VH), Extra High (XH))

Product Factors (multipliers: VL 1.09, L 1.05, N 1.0, H 0.96, VH 0.92, XH 0.87)
• Simplicity: VL Extremely complex; L Highly complex; N Moderately complex; H Moderately simple; VH Highly simple; XH Extremely simple
• Element Reuse: VL None (0%); L Minimal (15%); N Some (30%); H Moderate (50%); VH Considerate (70%); XH Extensive (90%)
• Low-Priority Deferrals: VL Never; L Rarely; N Sometimes; H Often; VH Usually; XH Anytime
• Models vs Documents: VL None (0%); L Minimal (15%); N Some (30%); H Moderate (50%); VH Considerate (70%); XH Extensive (90%)
• Key Technology Maturity: VL >0 TRL 1,2 or >1 TRL 3; L 1 TRL 3 or >1 TRL 4; N 1 TRL 4 or >2 TRL 5; H 1-2 TRL 5 or >2 TRL 6; VH 1-2 TRL 6; XH All > TRL 7

Process Factors (multipliers: VL 1.09, L 1.05, N 1.0, H 0.96, VH 0.92, XH 0.87)
• Concurrent Operational Concept, Requirements, Architecture, V&V: VL Highly sequential; L Mostly sequential; N 2 artifacts mostly concurrent; H 3 artifacts mostly concurrent; VH All artifacts mostly concurrent; XH Fully concurrent
• Process Streamlining: VL Heavily bureaucratic; L Largely bureaucratic; N Conservative bureaucratic; H Moderately streamlined; VH Mostly streamlined; XH Fully streamlined
• General SE tool support CIM (Coverage, Integration, Maturity): VL Simple tools, weak integration; L Minimal CIM; N Some CIM; H Moderate CIM; VH Considerable CIM; XH Extensive CIM

Project Factors (multipliers: VL 1.08, L 1.04, N 1.0, H 0.96, VH 0.93, XH 0.90)
• Project size (peak # of personnel): VL Over 300; L Over 100; N Over 30; H Over 10; VH Over 3; XH ≤ 3
• Collaboration support: VL Globally distributed, weak communication and data sharing; L Nationally distributed, some sharing; N Regionally distributed, moderate sharing; H Metro-area distributed, good sharing; VH Simple campus, strong sharing; XH Largely collocated, very strong sharing
• Single-domain MMPTs (Models, Methods, Processes, Tools): VL Simple MMPTs, weak integration; L Minimal CIM; N Some CIM; H Moderate CIM; VH Considerable CIM; XH Extensive CIM
• Multi-domain MMPTs: VL Simple, weak integration; L Minimal CIM; N Some CIM or not needed; H Moderate CIM; VH Considerable CIM; XH Extensive CIM

People Factors (multipliers: VL 1.13, L 1.06, N 1.0, H 0.94, VH 0.89, XH 0.84)
• General SE KSAs (Knowledge, Skills, Agility): VL Weak KSAs; L Some KSAs; N Moderate KSAs; H Good KSAs; VH Strong KSAs; XH Very strong KSAs
• Single-Domain KSAs: VL Weak; L Some; N Moderate; H Good; VH Strong; XH Very strong
• Multi-Domain KSAs: VL Weak; L Some; N Moderate or not needed; H Good; VH Strong; XH Very strong
• Team Compatibility: VL Very difficult interactions; L Some difficult interactions; N Basically cooperative interactions; H Largely cooperative; VH Highly cooperative; XH Seamless interactions

Risk Acceptance Factor (multipliers: VL 1.13, L 1.06, N 1.0, H 0.94, VH 0.89, XH 0.84)
• VL Highly risk-averse; L Partly risk-averse; N Balanced risk aversion, acceptance; H Moderately risk-accepting; VH Considerably risk-accepting; XH Strongly risk-accepting
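A sketch of how the per-group multipliers combine into one schedule multiplier (a product across groups; illustrative code, not the official CORADMO-SE tool). Using the insurance-agency project's ratings from the calibration data (Product VH, Process VH, Project XH, People VH, Risk N) reproduces its 0.68 multiplier:

```python
# Schedule multipliers per factor group, from the CORADMO-SE rating scales
MULTIPLIERS = {
    "product": {"VL": 1.09, "L": 1.05, "N": 1.0, "H": 0.96, "VH": 0.92, "XH": 0.87},
    "process": {"VL": 1.09, "L": 1.05, "N": 1.0, "H": 0.96, "VH": 0.92, "XH": 0.87},
    "project": {"VL": 1.08, "L": 1.04, "N": 1.0, "H": 0.96, "VH": 0.93, "XH": 0.90},
    "people":  {"VL": 1.13, "L": 1.06, "N": 1.0, "H": 0.94, "VH": 0.89, "XH": 0.84},
    "risk":    {"VL": 1.13, "L": 1.06, "N": 1.0, "H": 0.94, "VH": 0.89, "XH": 0.84},
}

def schedule_multiplier(ratings: dict[str, str]) -> float:
    """Combine per-group ratings into one schedule multiplier (their product)."""
    result = 1.0
    for group, rating in ratings.items():
        result *= MULTIPLIERS[group][rating]
    return result

# Insurance-agency project from the calibration data: VH VH XH VH N
print(round(schedule_multiplier(
    {"product": "VH", "process": "VH", "project": "XH", "people": "VH", "risk": "N"}), 2))  # 0.68
```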
-
Application Type | Technologies | Person-Months | Duration (Months) | Duration/√PM | Product | Process | Project | People | Risk | Multiplier | Error %
Insurance agency system HTML/VB 34.94 3.82 0.65 VH VH XH VH N 0.68 5%
Scientific/engineering C++ 18.66 3.72 0.86 L VH VH VH N 0.80 -7%
Compliance - expert HTML/VB 17.89 3.36 0.79 VH VH XH VH N 0.68 -15%
Barter exchange SQL/VB/ HTML 112.58 9.54 0.90 VH H H VH N 0.75 -16%
Options exchange site HTML/SQL 13.94 2.67 0.72 VH VH XH VH N 0.68 -5%
Commercial HMI C++ 205.27 13.81 0.96 L N N VH N 0.93 -3%
Options exchange site HTML 42.41 4.48 0.69 VH VH XH VH N 0.68 -1%
Time and billing C++/VB 26.87 4.80 0.93 L VH VH VH N 0.80 -14%
Hybrid Web/client-server VB/HTML 70.93 8.62 1.02 L N VH VH N 0.87 -15%
ASP HTML/VB/SQL 9.79 1.39 0.44 VH VH XH VH N 0.68 53%
On-line billing/tracking VB/HTML 17.20 2.70 0.65 VH VH XH VH N 0.68 4%
Palm email client C/HTML 4.53 1.45 0.68 N VH VH VH N 0.76 12%
CORADMO-SE Calibration DataMostly Commercial; Some DoD
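The Error % column above is consistent with CORADMO-SE predicting duration as multiplier × √(person-months): the error is the multiplier's relative deviation from the observed Duration/√PM ratio. A quick Python check (the function name is mine, for illustration):

```python
import math

def calibration_error(person_months, duration_months, multiplier):
    """Relative error of the predicted Duration/sqrt(PM) ratio
    (the multiplier) versus the observed ratio."""
    actual_ratio = duration_months / math.sqrt(person_months)
    return (multiplier - actual_ratio) / actual_ratio

# Insurance agency system row: PM=34.94, duration=3.82, multiplier=0.68
print(f"{calibration_error(34.94, 3.82, 0.68):+.0%}")  # +5%
# Scientific/engineering row: PM=18.66, duration=3.72, multiplier=0.80
print(f"{calibration_error(18.66, 3.72, 0.80):+.0%}")  # -7%
```

Both computed errors match the table's 5% and -7% entries.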
-
Accelerators/Ratings: VL, L, N, H, VH, XH
Product Factors (1.09, 1.05, 1.0, 0.96, 0.92, 0.87): Simplicity; Element Reuse; Low-Priority Deferrals; Models vs Documents; Key Technology Maturity
Process Factors (1.09, 1.05, 1.0, 0.96, 0.92, 0.87): Concurrent Operational Concept, Requirements, Architecture, V&V; Process Streamlining; General SE tool support CIM (Coverage, Integration, Maturity)
Project Factors (1.08, 1.04, 1.0, 0.96, 0.93, 0.9): Project size (peak # of personnel); Collaboration support; Single-domain MMPTs (Models, Methods, Processes, Tools); Multi-domain MMPTs
People Factors (1.13, 1.06, 1.0, 0.94, 0.89, 0.84): General SE KSAs (Knowledge, Skills, Agility); Single-Domain KSAs; Multi-Domain KSAs; Team Compatibility
Risk Acceptance Factor (1.13, 1.06, 1.0, 0.94, 0.89, 0.84)
[In the original chart, an X marks each factor's rated level; the column positions were lost in transcription.]

Case Study: From Plan-Driven to Agile - Initial Project: Focus on Concurrent SE
Expected schedule reduction of 0.96/1.09 = 0.88 (green arrow)
Actual schedule delay of 15% due to side effects (red arrows)
Model prediction: 0.88 × 1.09 × 1.04 × 1.06 × 1.06 = 1.1356
-
[Same accelerator rating chart as the previous slide, with the X marks moved to the improved ratings; column positions were lost in transcription.]

Case Study: From Plan-Driven to Agile - Next Project: Fix Side Effects; Reduce Bureaucracy
Model estimate: 0.88 × (0.92/0.96) × (0.96/1.05) = 0.77 speedup
Project results: 0.8 speedup
Model tracks project status; identifies further speedup potential
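The model estimate above can be reproduced by multiplying the prior speedup by the ratio of new to old multiplier for each changed rating. A small Python sketch (the function name is mine; the numbers are from the slide):

```python
def speedup(prior, rating_changes):
    """Predicted schedule speedup: prior speedup times the ratio
    (new multiplier / old multiplier) for each changed rating.
    rating_changes: list of (old_multiplier, new_multiplier) pairs."""
    s = prior
    for old, new in rating_changes:
        s *= new / old
    return s

# Fix side effects (0.96 -> 0.92) and reduce bureaucracy (1.05 -> 0.96):
print(round(speedup(0.88, [(0.96, 0.92), (1.05, 0.96)]), 2))  # 0.77
```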
-
Outline
• Critical nature of system qualities (SQs)
– Or non-functional requirements (NFRs); ilities
– Major source of project overruns, failures
– Significant source of stakeholder value conflicts
– Poorly defined, understood
– Underemphasized in project management
• Foundations: An initial SQs ontology
– Nature of an ontology; choice of IDEF5 structure
– Stakeholder value-based, means-ends hierarchy
– Synergies and Conflicts matrix and expansions
– Related SERC models, methods, processes, tools (MMPTs)
• Evidence-based processes for balancing SQs
-
Evidence-based processes for balancing SQs
• Context: The Incremental Commitment Spiral Model
– Viewpoint diagrams
– Principles Trump Diagrams
– Principle 4: Evidence and Risk-Based Decisions
• Evidence-based reviews enable early risk resolution
– They require more up-front systems engineering effort
– They have a high ROI for high-risk projects
– They synchronize and stabilize concurrent engineering
– The evidence becomes a first-class deliverable
• It requires planning and earned value management
• They can be added to traditional review processes
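The high-ROI claim can be illustrated with a toy risk-exposure calculation, using the standard definition RE = probability of loss × size of loss. All numbers and names below are hypothetical, invented for illustration, not from the tutorial:

```python
def roi_of_evidence(p_loss_without, p_loss_with, loss_size, evidence_cost):
    """ROI of up-front feasibility evidence: the risk exposure it
    avoids, net of its cost, per dollar of cost."""
    re_avoided = (p_loss_without - p_loss_with) * loss_size
    return (re_avoided - evidence_cost) / evidence_cost

# Hypothetical high-risk project: evidence work cuts a 40% chance of a
# $20M overrun down to 10%, for $1M of added systems engineering.
print(round(roi_of_evidence(0.40, 0.10, 20e6, 1e6), 2))  # 5.0
```

For a low-risk project (small loss size or low loss probability), the same formula goes negative, which is why the slide hedges the high ROI to high-risk projects.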
-
What is the ICSM?
• Risk-driven framework for determining and evolving best-fit system life-cycle process
• Integrates the strengths of phased and risk-driven spiral process models
• Synthesizes principles critical to successful system development
– Stakeholder value-based guidance
– Incremental commitment and accountability
– Concurrent multidiscipline engineering
– Evidence and risk-driven decisions
Principles
trump
diagrams…
Principles used by 60-80% of CrossTalk Top-5 projects, 2002-2005
© USC-CSSE
-
The Incremental Commitment Spiral Model

[Spiral diagram: cumulative level of understanding, product and process detail (risk-driven) grows along the spiral, with concurrent engineering of products and processes through the Exploration, Valuation, Foundations, Development1/Foundations2, Operation1/Development2/Foundations3, and Operation2/Development3/Foundations4 phases.]

Risk-based stakeholder commitment review points (opportunities to proceed, skip phases, backtrack, or terminate):
1. Exploration Commitment Review
2. Valuation Commitment Review
3. Foundations Commitment Review
4. Development Commitment Review
5. Operations1 and Development2 Commitment Review
6. Operations2 and Development3 Commitment Review

Evidence-Based Review Content
– A first-class deliverable
– Independent expert review
– Shortfalls are uncertainties and risks

Risk-Based Decisions: risk may be
– Negligible
– Acceptable
– High, but addressable
– Too high, unaddressable
-
ICSM Nature and Origins

• Integrates hardware, software, and human factors elements of systems life cycle
– Concurrent exploration of needs and opportunities
– Concurrent engineering of hardware, software, human aspects
– Concurrency stabilized via anchor point milestones
• Developed in response to a variety of issues
– Clarify “spiral development” usage
• Initial phased version (2005)
– Provide framework for human-systems integration
• National Research Council report (2007)
• Integrates strengths of current process models
– But not their weaknesses
• Facilitates transition from existing practices
– Electronic Process Guide (2009)
-
Incremental Commitment in Gambling
• Total Commitment: Roulette
– Put your chips on a number
• E.g., a value of a key performance parameter
– Wait and see if you win or lose
• Incremental Commitment: Poker, Blackjack
– Put some chips in
– See your cards, some of others’ cards
– Decide whether, how much to commit to proceed
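The gambling analogy can be made concrete with a toy expected-value calculation. All numbers below are hypothetical, invented for illustration, not from the tutorial:

```python
def ev_total(p_win, payoff, stake):
    """Roulette-style total commitment: stake everything up front."""
    return p_win * payoff - (1 - p_win) * stake

def ev_incremental(p_good_news, p_win_if_good, payoff, ante, raise_):
    """Poker-style incremental commitment: pay a small ante, observe,
    then raise only on good news; fold (losing the ante) otherwise."""
    ev_good = p_win_if_good * payoff - (1 - p_win_if_good) * (ante + raise_)
    return p_good_news * ev_good - (1 - p_good_news) * ante

print(ev_total(0.3, 100, 50))                # -5.0
print(ev_incremental(0.5, 0.6, 100, 5, 45))  # 17.5
```

With the same total stake, the incremental bettor turns a losing proposition into a winning one, because the small early commitment buys information before the large commitment is made.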
-
The Incremental Commitment Spiral Process: Phased View
Anchor Point Milestones
Synchronize, stabilize concurrency via FEDs
Risk patterns determine life cycle process
-
ICSM Activity Levels for Complex Systems
-
Anchor Point Feasibility Evidence Descriptions
• Evidence provided by developer and validated by independent experts that:
If the system is built to the specified architecture, it will
– Satisfy the requirements: capability, interfaces, level of service, and evolution
– Support the operational concept
– Be buildable within the budgets and schedules in the plan
– Generate a viable return on investment
– Generate satisfactory outcomes for all of the success-critical stakeholders
• All major risks resolved or covered by risk management plans
• Serves as basis for stakeholders’ commitment to proceed
• Synchronizes and stabilizes concurrent activities
Can be used to strengthen current schedule- or event-based reviews
-
Evidence-based processes for balancing SQs
• Context: The Incremental Commitment Spiral Model
– Viewpoint diagrams
– Principles Trump Diagrams
– Principle 4: Evidence and Risk-Based Decisions
• Evidence-based reviews enable early risk resolution
– They require more up-front systems engineering effort
– They have a high ROI for high-risk projects
– They synchronize and stabilize concurrent engineering
– The evidence becomes a first-class deliverable
• It requires planning and earned value management
• They can be added to traditional review processes
-
Nature of Feasibility Evidence
• Not just traceability matrices and PowerPoint charts
• Evidence can include results of
– Prototypes: of networks, robots, user interfaces, COTS interoperability
– Benchmarks: for performance, scalability, accuracy
– Exercises: for mission performance, interoperability, security
– Models: for cost, schedule, performance, reliability; tradeoffs
– Simulations: for mission scalability, performance, reliability
– Early working versions: of infrastructure, data fusion, legacy compatibility
– Previous experience
– Combinations of the above
• Validated by independent experts
– Realism of assumptions
– Representativeness of scenarios
– Thoroughness of analysis
– Coverage of key off-nominal conditions
-
Common Examples of Inadequate Evidence
1. Our engineers are tremendously creative. They will find a solution
for this.
2. We have three algorithms that met the KPPs on small-scale nominal
cases. At least one will scale up and handle the off-nominal cases.
3. We’ll build it and then tune it to satisfy the KPPs.
4. The COTS vendor assures us that they will have a security-certified
version by the time we need to deliver.
5. We have demonstrated solutions for each piece from our NASA,
Navy, and Air Force programs. It’s a simple matter of integration to
put them together.
-
Examples of Making the Evidence Adequate
1. Have the creative engineers prototype and evaluate a solution on some key nominal and off-nominal scenarios.
2. Prototype and evaluate the three algorithms on some key nominal and off-nominal scenarios.
3. Develop prototypes and/or simulations and exercise them to show that the architecture will not break while scaling up or handling off-nominal cases.
4. Conduct a scaled-down security evaluation of the current COTS product. Determine this and other vendors’ track records for getting certified in the available time. Investigate alternative solutions.
5. Have a tiger team prototype and evaluate the results of the simple matter of integration.
-
FED Development Process Framework
• As with other ICM artifacts, FED process and content are
risk-driven
• Generic set of steps provided, but need to be tailored to
situation
– Can apply at increasing levels of detail in Exploration, Valuation, and Foundations phases
– Can be satisfied by pointers to existing evidence
– Also applies to Stage II Foundations rebaselining process
• Examples provided for large simulation and testbed
evaluation process and evaluation criteria
-
Steps for Developing Feasibility Evidence
A. Develop phase work-products/artifacts
– For examples, see ICM Anchor Point Milestone Content charts
B. Determine most critical feasibility assurance issues
– Issues for which lack of feasibility evidence is program-critical
C. Evaluate feasibility assessment options
– Cost-effectiveness, risk reduction leverage/ROI, rework avoidance
– Tool, data, scenario availability
D. Select options, develop feasibility assessment plans
E. Prepare FED assessment plans and earned value milestones
– Try to relate earned value to risk-exposure avoided rather than
budgeted cost
“Steps” denoted by letters rather than numbers
to indicate that many are done concurrently
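Step E suggests crediting earned value as risk exposure avoided rather than budgeted cost. A minimal sketch of that accounting; the task names, probabilities, and function name are hypothetical, invented for illustration:

```python
def earned_value(completed_tasks):
    """Earned value as cumulative risk exposure retired: sum of
    (probability of loss retired) * (size of loss) per completed task.
    completed_tasks: list of (p_loss_retired, loss_size) pairs."""
    return sum(p * size for p, size in completed_tasks)

# Two completed feasibility assessments: a benchmark retiring a 20%
# chance of a $5M rework, and a prototype retiring a 10% chance of a
# $12M redesign.
print(round(earned_value([(0.20, 5e6), (0.10, 12e6)]), 2))  # 2200000.0
```

Under this accounting, a cheap assessment that retires a large risk earns more value than an expensive one that retires little, which is the point of the step E recommendation.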
-
Steps for Developing Feasibility Evidence (continued)

F. Begin monitoring progress with respect to plans
– Also monitor project/technology/objectives changes and adapt plans
G. Prepare evidence-generation enablers
– Assessment criteria
– Parametric models, parameter values, bases of estimate
– COTS assessment criteria and plans
– Benchmarking candidates, test cases
– Prototypes/simulations, evaluation plans, subjects, and scenarios
– Instrumentation, data analysis capabilities
H. Perform pilot assessments; evaluate and iterate plans and enablers
I. Assess readiness for Commitment Review
– Shortfalls identified as risks and covered by risk mitigation plans
– Proceed to Commitment Review if ready
J. Hold Commitment Review when ready; adjust plans based on review outcomes
-
Large-Scale Simulation and Testbed FED Preparation Example
-
Overview of Example Review Process: DCR/MS-B

Review Entrance Criteria
• Successful FCR/MS-A
• Required inputs available

Review Planning Tasks
• Collect/distribute review products
• Determine readiness
• Identify stakeholders, expert reviewers
• Identify review leader and recorder
• Identify location/facilities
• Prepare/distribute agenda

Review Inputs
• DCR/MS-B package: operational concept, prototypes, requirements, architecture, life cycle plans, feasibility evidence

Perform Pre-Review Technical Activities
• Experts, stakeholders review DCR/MS-B package, submit issues
• Developers prepare responses to issues

Conduct DCR/MS-B Review Meeting
• Discuss, resolve issues
• Identify action plans, risk mitigation plans

Review Exit Criteria
• Evidence of DCR/MS-B package feasibility validated
• Feasibility shortfalls identified as risks, covered by risk mitigation plans
• Stakeholder agreement on DCR/MS-B package content
• Stakeholder commitment to support Development phase
• All open issues have action plans
• Otherwise, review fails

Review Outputs
• Action plans
• Risk mitigation plans

Post-Review Tasks
• Publish review minutes
• Publish and track open action items
• Document lessons learned
-
Lean Risk Management Plan: Fault Tolerance Prototyping
1. Objectives (The “Why”)
– Determine, reduce level of risk of the fault tolerance features causing unacceptable performance (e.g., throughput, response time, power consumption)
– Create a description of and a development plan for a set of low-risk fault tolerance features
2. Deliverables and Milestones (The “What” and “When”)
– By week 3
1. Evaluation of fault tolerance option
2. Assessment of reusable components
3. Draft workload characterization
4. Evaluation plan for prototype exercise
5. Description of prototype
– By week 7
6. Operational prototype with key fault tolerance features
7. Workload simulation
8. Instrumentation and data reduction capabilities
9. Draft description, plan for fault tolerance features
– By week 10
10. Evaluation and iteration of prototype
11. Revised description, plan for fault tolerance features
-
Lean Risk Management Plan: Fault Tolerance Prototyping (continued)
• Responsibilities (The “Who” and “Where”)
– System Engineer: G. Smith
• Tasks 1, 3, 4, 9, 11; support of tasks 5, 10
– Lead Programmer: C. Lee
• Tasks 5, 6, 7, 10; support of tasks 1, 3
– Programmer: J. Wilson
• Tasks 2, 8; support of tasks 5, 6, 7, 10
• Approach (The “How”)
– Design-to-schedule prototyping effort
– Driven by hypotheses about fault tolerance-performance effects
– Use multicore processor, real-time OS; add prototype fault tolerance features
– Evaluate performance with respect to representative workload
– Refine prototype based on results observed
• Resources (The “How Much”)
$60K - Full-time system engineer, lead programmer, programmer: (10 weeks) × (3 staff) × ($2K/staff-week)
$0K - 3 dedicated workstations (from project pool)
$0K - 2 target processors (from project pool)
$0K - 1 test co-processor (from project pool)
$10K - Contingencies
$70K - Total
-
Feasibility Evidence DID Overview
• Tailorable up from simple-project version
– Criteria provided for simple, intermediate, and complex projects
• Complex-project version based on key SE studies
– NRC Early Systems Engineering study
– Services Probability of Program Success frameworks
– NDIA-SEI SE Effectiveness Survey
– INCOSE SE Leading Indicators
– SISAIG SE Early Warning Indicators
• Organized into Goal-Critical Success Factor-Question Hierarchy
– Tailorable up at each hierarchy level
-
Criteria for Simple, Intermediate, and Complex Projects
-
FED DID General Information for Simple Projects
-
Can Tailor DID Up at Goal or CSF Level
-
Example of Tailoring-Up Use
• Quantitative Methods, Inc. (QMI) is a leader in developing
complex object-recognition systems (ORS)
• Coast Guard contracting with QMI for an ORS
– Simpler than ORSs developed for Navy, Air Force
– But includes new university-research algorithms
– Uncertainty in performance leads to KPP ranges in contract
• Only a few of the Goals and CSFs need to be tailored in
– CSF 1.1, Understanding of stakeholder needs: key performance parameters
• Question 1 on KPP identification covered by KPP ranges
• Question 3 on effectiveness verification tailored in
– CSF 1.2, Concurrent exploration of solution opportunities: tailored in to address alternative high-performance-computing platforms
– CSF 1.3 on system scoping and CSF 1.4 on requirements prioritization: tailored out as already covered
-
Spreadsheet Tool Enables Risk Monitoring
That can be independently validated
-
Need for FED in Large Systems of Systems
[Chart: Percent of Time Added to Overall Schedule (0-100) vs. Percent of Project Schedule Devoted to Initial Architecture and Risk Resolution (0-60), with curves for 10, 100, and 10000 KSLOC systems. The total % added schedule sums the time added for architecture and risk resolution with the added schedule devoted to rework (COCOMO II RESL factor); the minimum of each total curve is the Sweet Spot, which sits at higher architecture investment for larger systems.]

Sweet Spot Drivers:
Rapid Change: leftward
High Assurance: rightward
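The sweet-spot behavior can be sketched numerically. This toy model only reproduces the shape of the chart: the coefficients and decay rate below are invented, not the calibrated COCOMO II RESL values. It adds architecture investment to an exponentially declining rework penalty and finds the minimizing investment for three rework scales standing in for system size:

```python
def total_added(arch_pct, rework_at_zero, decay):
    """Total % added schedule: architecture/risk-resolution investment
    plus the rework it fails to prevent (rework decays as investment
    grows)."""
    return arch_pct + rework_at_zero * (decay ** arch_pct)

# Larger initial rework (a stand-in for larger systems) pushes the
# sweet spot (the minimizing investment) rightward.
for rework0 in (18, 45, 90):  # stand-ins for 10, 100, 10000 KSLOC
    sweet_spot = min(range(61), key=lambda a: total_added(a, rework0, 0.93))
    print(rework0, sweet_spot)
```

Rapid change effectively steepens the decay (less architecture pays off), moving the sweet spot leftward; high assurance raises the rework penalty, moving it rightward, as the chart's drivers indicate.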
-
Case Study: CCPDS-R Project Overview

Characteristic: CCPDS-R
– Domain: Ground-based C3 development
– Size/language: 1.15M SLOC Ada
– Average number of people: 75
– Schedule: 75 months; 48-month IOC
– Process/standards: DOD-STD-2167A, iterative development
– Environment: Rational host, DEC host, DEC VMS targets
– Contractor: TRW
– Customer: USAF
– Current status: Delivered on-budget, on-schedule

References: [Royce, 1998], Appendix D; ICSM book, Chapter 4
-
CCPDS-R Reinterpretation of SSR, PDR

[Development life cycle timeline, months 0-25: Inception, Elaboration, and Construction. Inception was a competitive design phase (architectural prototypes, planning, requirements analysis) with high-risk prototypes, ending at contract award. Architecture iterations ran through SSR and IPDR (serving as LCO) to PDR (serving as LCA), where the architecture baseline came under change control with a working network OS with validated failover; release iterations then ran through CDR, with early delivery of an “alpha” capability to the user.]
-
CCPDS-R Results: No Late 80-20 Rework
�Architecture first-Integration during the design phase-Demonstration-based evaluation
�Configuration baseline change metrics:
RATIONALS o f t w a r e C o r p o r a t I o n
Project Development Schedule15 20 25 30 35 40
40
30
20
10
Design Changes
Implementation Changes
MaintenanceChanges and ECP’s
HoursChange
�Risk Management
-
Conclusions
• System qualities (SQs) are success-critical
– Major source of project overruns, failures
– Significant source of stakeholder value conflicts
– Poorly defined, understood
– Underemphasized in project management
• SQs ontology clarifies nature of system qualities
– Using value-based, means-ends hierarchy
– Identifies variation types: referents, states, processes, relations
– Relations enable SQ synergies and conflicts identification
• Evidence-based processes best for balancing SQs
-
List of Acronyms

CD: Concept Development
CP: Competitive Prototyping
DCR: Development Commitment Review
DoD: Department of Defense
ECR: Exploration Commitment Review
EV: Expected Value
FCR: Foundations Commitment Review
FED: Feasibility Evidence Description
GAO: Government Accountability Office
ICM: Incremental Commitment Model
KPP: Key Performance Parameter
MBASE: Model-Based Architecting and Software Engineering
OCR: Operations Commitment Review
RE: Risk Exposure
RUP: Rational Unified Process
V&V: Verification and Validation
VB: Value of Bold approach
VCR: Valuation Commitment Review
-
References

B. Boehm, J. Lane, S. Koolmanojwong, and R. Turner, The Incremental Commitment Spiral Model, Addison-Wesley, 2014.
B. Boehm, “Anchoring the Software Process,” IEEE Software, July 1996.
B. Boehm, A.W. Brown, V. Basili, and R. Turner, “Spiral Acquisition of Software-Intensive Systems of Systems,” CrossTalk, May 2004, pp. 4-9.
B. Boehm and J. Lane, “Using the ICM to Integrate System Acquisition, Systems Engineering, and Software Engineering,” CrossTalk, October 2007, pp. 4-9.
J. Maranzano et al., “Architecture Reviews: Practice and Experience,” IEEE Software, March/April 2005.
R. Pew and A. Mavor, Human-System Integration in the System Development Process: A New Look, National Academy Press, 2007.
W. Royce, Software Project Management, Addison-Wesley, 1998.
R. Valerdi, “The Constructive Systems Engineering Cost Model,” Ph.D. dissertation, USC, August 2005.
CrossTalk articles: www.stsc.hill.af.mil/crosstalk
-
Backup charts
-
Metrics Evaluation Criteria

• Correlation with quality: How much does the metric relate to our notion of software quality? It implies that nearly all programs with a similar value of the metric will possess a similar level of quality. This is a subjective correlational measure, based on our experience.
• Importance: How important is the metric, and are low or high values preferable when measuring it? The scales, in descending order of priority, are: Extremely Important, Important, and Good to have.
• Feasibility of automated evaluation: Is evaluation fully or partially automatable, and what kinds of metrics are obtainable?
• Ease of automated evaluation: If automated, how easy is it to compute the metric? Does it involve enormous effort to set up, can it be plug-and-play, or does it need to be developed from scratch? Are any off-the-shelf (OTS) tools readily available?
• Completeness of automated evaluation: Does the automation completely capture the metric value, or is it inconclusive, requiring manual intervention? Do we need to verify things manually, or can we directly rely on the metric reported by the tool?
• Units: What units/measures are we using to quantify the metric?
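One hedged way to apply criteria like these is a simple weighted scoring matrix. Everything below is invented for illustration: the candidate metrics, the 1-3 ratings, and the weights (which put the "Extremely Important" criteria first) are hypothetical, not from the tutorial.

```python
# Hypothetical weights per criterion, roughly tracking the priority
# ordering described above.
CRITERIA_WEIGHTS = {"correlation": 3, "importance": 3, "feasibility": 2,
                    "ease": 1, "completeness": 2}

def score(metric_ratings):
    """Weighted sum of a metric's 1-3 ratings on each criterion."""
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in metric_ratings.items())

# Hypothetical candidate metrics and ratings.
candidates = {
    "cyclomatic complexity": {"correlation": 2, "importance": 2,
                              "feasibility": 3, "ease": 3, "completeness": 3},
    "defect density":        {"correlation": 3, "importance": 3,
                              "feasibility": 2, "ease": 2, "completeness": 2},
}
for name, ratings in sorted(candidates.items(), key=lambda kv: -score(kv[1])):
    print(name, score(ratings))
```

The real evaluation in the tutorial is qualitative; a matrix like this just makes the tradeoffs (e.g., easy automation vs. strong quality correlation) explicit and auditable.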