Software Project Management
Manage Unavoidable Complexity
Avoid Unmanageable Complexity
Project Management: Some Statements
• If you fail to plan, you plan to fail
• Planning seems to increase luck
• Planning and estimating is a key competence for all software engineers
• If you don’t ask for risk information, you are asking for trouble
• Uncertainty must certainly not be stated in uncertain terms
• If you don’t actively attack risks, they will actively attack you
• Insanity: doing the same thing over and over again and expecting a different result
• The most important customer of the project plan is the project team itself
Content
• Project Management Basics
• Requirements Management
• Estimation
• Risk Management
• Project Tracking
– Milestone Trend Analysis
– Earned Value Chart
• Intergroup Co-ordination
• Peer Reviews
• Problem Report Tracking
Project Management Basics
The three variables of a project:
• Schedule: lead time, process, planning
• Performance: specifications, deliverables, quality, etc.
• Cost: human resources, equipment, etc.
If they are all fixed: don't appoint a project manager; keep your fingers crossed.
If they can be adjusted: appoint a project manager and set priorities.
Project Management Processes
[Diagram: the project management control loop. From project data (technology, skills, resources) a plan is made; project implementation produces measured data; monitoring compares the plan against the measured data on schedule, cost, and performance; control translates deviations into readjustments of the plan. The project delivers the product.]
Two main phases in a software project
[Diagram: timeline from CS (Concept Start) through the Concept Confirmation phase to CD (Commitment Date: commitment to cost, schedule, and performance), and through the Product Implementation phase to CR (Commercial Release). Associated documents: Commercial Requirements Spec., Functional Requirements Spec., Architecture / Top-Level Design, and the Software Development Plan.]
Content of a Software Development Plan
• Objectives, Scope (with reference to specification/architecture)
• Deliverables
• Activities (work breakdown structure with activities of max. 2-3 weeks)
• Assumptions, Dependencies, Constraints (e.g., required hardware models, intergroup dependencies)
• Estimates (size, effort, schedule, computer resources, plus justification)
• Project Organization and Resource Allocation
• Schedule of Activities (including project milestones)
• Risk Management (identification, resolution, monitoring of risks)
• If applicable, Subcontract Management Plan(s)
• Progress reporting and communication structure
• Software Development Environment (methods/tools/standards)
• Project Quality Plan (incl. (intermediate) release acceptance criteria)
• Software Configuration Management Plan
Requirements Management
• At the beginning of the project:
– Collect commercial requirements as input
– Write the Requirements Specification
– Review it with all stakeholders
– Formally approve the specification and use it as the baseline for the project activities
• What goes wrong?
– Stakeholders don't know what they want
– Not all stakeholders identified (e.g., service, factory)
– Many change requests during the project
• Possible solutions:
– Work on requirements in a pre-project phase
– Rapid prototyping / incremental deliveries
– Formal change control
Estimation
• The only unforgivable failure is to not learn from past failures
• Project failures are mostly caused by inflated and unreasonable expectations
• Definitions of "estimate":
– Default: a prediction that meets the enforced deadline
– Alternative: the most optimistic prediction that has a non-zero probability of coming true
– Proposed: a prediction that is equally likely to be above or below the actual result
Causes of inaccurate estimates
• Frequent requests for changes
• Overlooked tasks
• User’s lack of understanding of their own requirements
• Insufficient communication and understanding between users and analysts
• Poor or imprecise problem definition
• Insufficient analysis when developing estimate
• Lack of co-ordination between involved disciplines
• Lack of an adequate methodology or guidelines for estimating (lack of historical data)
From: Albert L. Lederer and Jayesh Prasad, "Nine Management Guidelines for Better Cost Estimating", Communications of the ACM, Vol. 35, No. 2, February 1992.
How to develop estimates? - 1
• Immature (lack of) process: estimated date = target date
• First improvement step: size/effort estimates are made based on expert opinions and the project team's own historical data, using Wide Band Delphi techniques
• Wide Band Delphi:
– A facilitator is assigned, who organises an estimation meeting and presents each expert with an estimation form.
– The participants discuss estimation issues using their knowledge of the requirements and the software architecture.
– Each participant fills out the estimation form anonymously and hands it over to the facilitator. The participants should not share their estimates.
– The facilitator prepares and distributes a summary of the estimates with the averages and standard deviations of all estimates.
– The facilitator initiates a discussion between the experts, focussing on those points where their estimates varied widely.
– After the discussion, the experts fill out the estimation forms again, anonymously, and hand them over to the facilitator.
The last 3 steps are repeated until consensus is reached (see the sketch below).
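A minimal sketch (not from the slides) of the facilitator's summary step, assuming estimates are collected as plain effort figures and consensus is declared when the relative deviation drops below a chosen threshold:

```python
# Sketch of one Wide Band Delphi summary round (helper names and the
# consensus threshold are illustrative assumptions, not from the slides).
from statistics import mean, stdev

def summarize_round(estimates, consensus_threshold=0.15):
    """Summarize one round of anonymous effort estimates (person-weeks)."""
    avg = mean(estimates)
    dev = stdev(estimates)
    # Treat a small relative deviation as consensus; otherwise another
    # discussion and estimation round is needed.
    consensus = dev / avg < consensus_threshold
    return avg, dev, consensus

# Example: four experts' estimates after the first round.
avg, dev, consensus = summarize_round([10, 14, 9, 21])
print(f"average={avg:.1f}, stdev={dev:.1f}, consensus={consensus}")
```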
How to develop estimates? - 2
• What can go wrong with Wide Band Delphi?
– No consensus has been reached when the plan must be presented to management
– The average of the individual estimates is taken as the final estimate
– But: when the deviation remains high, you only know that you have insufficient insight
• Next improvement step: size/effort estimates are made using historical data from the organization's Software Process Database (with an explicit effort to use data from "similar" projects):
– Use expert opinion to determine the "project type"
– Use historical data from the same type of projects for productivity figures (from "size" to "effort"), as sketched below
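As an illustration of the "size to effort" step, a small sketch assuming the process database stores one productivity figure per project type (all figures invented):

```python
# Sketch of estimating effort from size using historical productivity
# figures per project type (types and numbers invented for illustration).
PRODUCTIVITY = {  # lines of code per person-month, by project type
    "embedded": 150,
    "enterprise": 400,
    "scientific": 600,
}

def estimate_effort(size_loc, project_type):
    """Convert a size estimate (LOC) into effort (person-months)."""
    return size_loc / PRODUCTIVITY[project_type]

print(estimate_effort(30_000, "embedded"))  # 200 person-months
```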
Project type classification
[Chart from Walker Royce: example systems positioned by technical and managerial complexity - large-scale simulation, DOD management information system, enterprise information systems, telecom switch, DOD weapon system, national air traffic control system, commercial compiler, embedded automotive application, small scientific simulation, business spreadsheet, enterprise application (such as order entry).]
• Higher technical complexity: embedded, real-time, distributed, fault-tolerant; high-performance, portable; unprecedented, architecture re-engineering
• Lower technical complexity: straightforward automation, single thread; interactive performance, single platform; many precedent systems, application re-engineering
• Higher managerial complexity: large scale (multi-site), contractual, many stakeholders - "projects"
• Lower managerial complexity: smaller scale, informal, few stakeholders - "products"
• Average software project: 5 to 10 people, 10 to 12 months, 3 to 5 external interfaces, some unknowns and risks
From: Walker Royce, Software Project Management, Addison-Wesley, 1998, ISBN 81-7808-013-3.
Risk Management
• List the top-x risks
• Add actions to prevent each risk from occurring (aversion actions, with an action holder)
• Add contingency plans (what to do when the risk materializes)
• Plan capacity for a certain percentage of the total effort required for the contingency plans
Risk register columns (see the sketch below):
Risk | Probability P (1-5) | Seriousness of effect S (1-5) | Exposure P * S (1-25) | Aversion action | Action holder | Contingency
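A small sketch of such a risk register, with invented example risks; the exposure figure drives the selection of the top-x risks to manage actively:

```python
# Sketch of a risk register with exposure = P * S (example risks invented).
risks = [
    {"risk": "Key hardware model arrives late", "P": 4, "S": 5},
    {"risk": "Requirements change during implementation", "P": 5, "S": 3},
    {"risk": "New compiler version is unstable", "P": 2, "S": 4},
]

for r in risks:
    r["exposure"] = r["P"] * r["S"]  # ranges over 1..25

# List the top-x risks by exposure; in a real register each would also
# carry an aversion action, an action holder, and a contingency plan.
top_x = sorted(risks, key=lambda r: r["exposure"], reverse=True)[:2]
for r in top_x:
    print(f'{r["exposure"]:>2}  {r["risk"]}')
```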
Project Tracking
[Cartoon: Dilbert, by Scott Adams]
Content of a Software Progress Report - 1
• Management Summary and Attention Points
– short status overview
– issues requiring immediate management action
• Project Dashboard
– Milestone Trend Analysis
– Earned Value Chart
• Project Status
– major results achieved during the reporting period
– major expectations for the coming period
– complete list of deliverables/results
– project definition changes (planned vs. actual number of change requests)
– main problems and issues
• Risk Management
– identification of risks + probability + effects + seriousness
– actions + action holders + action status
Content of a Software Progress Report - 2
• Project Staffing
– planned versus actual staffing
• Budget
– budgeted versus actual costs
• Problem Reports
– % of test cases passed / failed / not yet executed
– expected versus actual numbers
– status of CRs/PRs (entered, analyzed, solved, tested, closed, rejected)
• Performance Tracking
– actual size figures (versus estimates)
– actual usage of critical computer resources (versus estimates)
• Inter-group Deliverables and Issues
– identification of all inter-group deliverables and issues
– actions + action holders + action status
• Other Issues
Milestone Trend Analysis
[Three example MTA charts: the x-axis shows actual time (reporting months jan-dec), the y-axis the predicted date (jan-dec) of each milestone - Product Range Start, Design Release, Commercial Release. A horizontal line means a stable prediction; a rising line means the milestone is slipping.]
Example Milestone Trend Analysis: what can you learn from these MTAs?
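A sketch of how such a chart can be drawn (assumes matplotlib is available; the data points are invented): at every reporting month, the currently predicted month of each milestone is re-plotted against the reporting date.

```python
# Sketch of a Milestone Trend Analysis chart (months as numbers 1..12
# for simplicity; all predictions invented for illustration).
import matplotlib.pyplot as plt

reports = [1, 2, 3, 4, 5, 6]  # reporting months (actual time)
milestones = {
    "Product Range Start": [4, 4, 4, 4, 4, 4],       # stable prediction
    "Design Release":      [7, 7, 8, 8, 9, 9],       # slipping steadily
    "Commercial Release":  [10, 10, 11, 12, 12, 12], # slipping, then stable
}

for name, predicted in milestones.items():
    plt.plot(reports, predicted, marker="o", label=name)
plt.xlabel("Actual time (reporting month)")
plt.ylabel("Predicted milestone month")
plt.title("Milestone Trend Analysis")
plt.legend()
plt.show()
```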
Simple WBS Example

Deliverable  Effort  Human Resources           Start & End Dates  Relations                Cum Eff
D1           2       Adam, Betty               9901.1 - 9901.5    -                        2
D2           1       Adam                      9902.1 - 9902.5    D1
D3           1       Betty                     9902.1 - 9902.5    D1                       4
D4           1       Adam                      9903.1 - 9903.5    D1
D5           1       Betty                     9903.1 - 9903.5    D1                       6
R1           1       Adam                      9904.1 - 9904.5    D2
R2           2       Betty, Carol              9904.1 - 9904.5    D1                       9
R3           3       Adam, Betty, Carol        9905.1 - 9905.5    D3                       12
R4           1       Betty                     9906.1 - 9906.5    D5
R5           2       Adam, Carol               9906.1 - 9906.5    D4                       15
R6           2       Adam, Carol               9907.1 - 9907.5    D4
R7           2       Betty, Dave               9907.1 - 9907.5    D5                       19
R8           4       Adam, Betty, Carol, Dave  9908.1 - 9908.5    D1                       23
R9           1       Adam                      9909.1 - 9909.5    D2
R10          2       Betty, Dave               9909.1 - 9909.5    D3
R11          1       Carol                     9909.1 - 9909.5    D4                       27
T1           3       Betty, Carol, Dave        9910.1 - 9910.5    R1 - R2 - R8 - R9        30
T2           1       Betty                     9911.1 - 9911.5    R4 - R5 - R6 - R7 - R11
T3           1       Carol                     9911.1 - 9911.5    R3 - R10                 32
T4           2       Betty, Carol              9912.1 - 9912.5    T1 - T2 - T3             34
E1           1       Betty                     9913.1 - 9913.5    T4                       35
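The Cum Eff column can be recomputed from the deliverables' efforts and their planned periods; a small sketch:

```python
# Sketch of deriving the cumulative planned effort per period from the
# WBS above: (deliverable, effort, planned period).
wbs = [
    ("D1", 2, 9901), ("D2", 1, 9902), ("D3", 1, 9902), ("D4", 1, 9903),
    ("D5", 1, 9903), ("R1", 1, 9904), ("R2", 2, 9904), ("R3", 3, 9905),
    ("R4", 1, 9906), ("R5", 2, 9906), ("R6", 2, 9907), ("R7", 2, 9907),
    ("R8", 4, 9908), ("R9", 1, 9909), ("R10", 2, 9909), ("R11", 1, 9909),
    ("T1", 3, 9910), ("T2", 1, 9911), ("T3", 1, 9911), ("T4", 2, 9912),
    ("E1", 1, 9913),
]

cumulative = 0
for period in range(9901, 9914):
    cumulative += sum(e for _, e, p in wbs if p == period)
    print(period, cumulative)  # reproduces the Cum Eff column: 2, 4, ... 35
```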
Schedule Example
[Gantt chart: the deliverables above plotted across periods 9901-9913, in parallel tracks: D1 -> R2 -> R8; D2 -> R1 -> R9 -> T1; D3 -> R3 -> R10 -> T3; D4 -> R5/R6 -> R11; D5 -> R4/R7 -> T2; all converging on T4 and finally E1.]
Earned Value Chart (incomplete)
[Chart: cumulative Planned Effort and Value (y-axis: effort, 0-40) plotted over weeks 1-13.]
Earned Value is taken from tangible deliverables; Effort Spent is taken from time cards. Both are tracked against the Planned Effort and Value from the WBS and schedule above.
Earned Value Chart
[Chart: three cumulative curves over weeks 1-13 (y-axis: effort, 0-40): Planned Effort and Value, Earned Value, and Effort Spent.]
• Schedule: slip of 3 weeks, a 50% delay
• Budget: 20% under budget, but a 50% effort overrun
• Analysis: Carol is still at another project; lower productivity than estimated
• Mitigation: depending on priority setting, less functionality and/or more budget and/or a delayed release
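A sketch of the standard earned-value indicators such a chart supports (the formulas are the usual EVM ones; the numbers are invented, not read from the example):

```python
# Sketch of earned-value indicators at a given week (numbers invented).
planned_value = 30   # cumulative planned effort at this week
earned_value  = 20   # effort value of the deliverables actually completed
effort_spent  = 24   # actual effort booked on time cards

schedule_variance = earned_value - planned_value  # negative: behind schedule
cost_variance     = earned_value - effort_spent   # negative: effort overrun
spi = earned_value / planned_value  # schedule performance index (< 1: late)
cpi = earned_value / effort_spent   # cost performance index (< 1: overrun)
print(schedule_variance, cost_variance, round(spi, 2), round(cpi, 2))
```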
Maturity of project tracking process
• Lack of process: constant crisis and reaction
• First improvement: corrective actions are taken when the actual results deviate significantly from the plan, based on team judgment
• Next improvement: corrective actions are taken when the actual results deviate from the plan by more than pre-defined thresholds, based on experience
• Final improvement: corrective actions are taken when the actual results fall outside the Upper and Lower Control Limits obtained from Statistical Process Control (see the sketch below)
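A small sketch of the final step, assuming control limits are set at mean ± 3 sigma of historical plan-vs-actual deviations (a common SPC convention; all numbers invented):

```python
# Sketch of an SPC-style tracking check: flag corrective action when the
# current deviation falls outside control limits derived from history.
from statistics import mean, stdev

historical_deviations = [0.02, -0.05, 0.04, 0.01, -0.03, 0.06]  # plan vs actual
center = mean(historical_deviations)
sigma = stdev(historical_deviations)
ucl, lcl = center + 3 * sigma, center - 3 * sigma  # upper/lower control limits

this_period = 0.18  # 18% over plan this reporting period
if not (lcl <= this_period <= ucl):
    print("Outside control limits: take corrective action")
```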
Inter-group co-ordination
• Required deliverables from other groups (internal and/or external) are listed under "Dependencies" in the project plan: it is assumed that these deliveries will be on time and of the right quality
• Improvement step: all (intermediate) deliverables from/to other groups are explicitly listed with delivery date and person responsible, and quality criteria are defined and agreed upon for all of them
• Next improvement step:
– pro-active checking of the status of the expected deliverables
– in case of problems, a joint effort is started to solve the issue
– in case of disagreement, the issue is escalated via pre-defined and agreed-upon channels
Peer Reviews
• Two types of review:
– to improve quality through detection of defects by peers, as early as possible
– to formally approve a deliverable, by e.g. the project leader, system architect, or management
• Lack of process: the document is quickly read by the reviewers
• First improvements (to establish a stable process):
– review meetings and preparation time are planned
– checklists (per type of document) are used and maintained
– metrics are collected (preparation time, review meeting time, number of major/minor defects found, etc.), as sketched below
• Next improvements (to improve yield):
– ensure the right persons are involved in the review
– measure and ensure the effectiveness of reviews
Problem Report Handling: Maturity Grid
Severity of problem:
• S: Safety - the product causes a dangerous situation
• A: Product cannot be shipped; most customers will return it
– errors in basic functionality
– major errors in advanced functions
– non-compliance to standards
– errors that are irritating for every customer
• B: Product can be sold, but critical customers will return it
– minor errors in the basic functions
– major errors in functions that are difficult to reach (>2 levels deep in the menus)
– major errors in stress tests
• C: Customers tolerate the defect or do not see it
– minor errors in functions that are difficult to reach
– minor errors in stress tests
• D: The defect is accepted for the product

Evolution of problem:
• 4: Problem entered, cause not yet known
• 3: Problem analyzed, cause known, problem assigned to an engineer
• 2: Solution implemented and tested by the engineer
• 1: Solution tested by a tester, included in a formal release, and declared solved
• 0: Solution verified by the submitter and declared closed
Problem Report Handling: Example Maturity Grid
(rows: problem state, 4 = entered down to 0 = closed; columns: severity)

State   S    A    B    C    D
4       0    1    3   12    0
3       0    3    3    8    0
2       0    3   11   24    0
1       0    5   31   56    0
0       1   26   79  125    7
Problem Report Handling
• All tests are done and the Maturity Grid shows:

State   S    A    B    C    D
4       0    0    0    0    0
3       0    0    0    0    0
2       0    1    2    7    0
1       0    5   17   12    0
0       0  164  206  343   53

• Can we start releasing the product?
• It depends! (A sketch of reading the grid follows below.)
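A sketch of reading such a grid programmatically: any problem report not yet in state 0 counts as open, and open counts per severity are compared against release criteria (the criterion shown is only an example, not a rule from the slides):

```python
# Sketch of the second example grid: state -> {severity: count}.
grid = {
    4: {"S": 0, "A": 0, "B": 0, "C": 0, "D": 0},
    3: {"S": 0, "A": 0, "B": 0, "C": 0, "D": 0},
    2: {"S": 0, "A": 1, "B": 2, "C": 7, "D": 0},
    1: {"S": 0, "A": 5, "B": 17, "C": 12, "D": 0},
    0: {"S": 0, "A": 164, "B": 206, "C": 343, "D": 53},
}

# "Open" means any state other than 0 (closed).
open_by_severity = {
    sev: sum(grid[state][sev] for state in (4, 3, 2, 1)) for sev in "SABCD"
}
print(open_by_severity)  # {'S': 0, 'A': 6, 'B': 19, 'C': 19, 'D': 0}
# Whether release can start depends on the agreed release criteria,
# e.g. "no open S or A problems": here 6 open A problems would block it.
```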
[Two charts: PRs entered, PRs solved, and PRs still open per week, over weeks 320-338.]
Final Project Stage - PR Curve
[Chart: "FW-R33/55, CDR 800/820 set software: open A+B+C PRs" - open PR count (0-300) per week, wk107-wk137, decaying asymptotically towards zero.]
The Asymptotic Period
How to limit the "asymptotic" period needed for maturing the software? Find as many defects as possible, as early as possible!

Measures to be taken:
• Ensure (plan + track) that all required testing is done on time
– integration, functional, system, electrical, torture room, field, factory, service, conformance, stress/stability
• Ensure multiple rounds of testing are done on time
– after fixing an "A" problem (e.g. certain functionality does not work at all) found in the first round, multiple "B" and "C" problems will probably be found in the next round
• Prioritize PRs and ensure PRs are closed when they have been solved

Longer-term improvements:
• Reuse test cases (increasing test coverage)
– random testing often results in PRs and should result in new test cases
– measure test coverage; track the % of test cases executed and passed (see the sketch below)
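A minimal sketch of such tracking, assuming test cases are simply recorded with executed/passed flags (data invented):

```python
# Sketch of tracking the % of test cases executed and passed per round.
test_cases = [
    {"id": "TC-001", "executed": True,  "passed": True},
    {"id": "TC-002", "executed": True,  "passed": False},
    {"id": "TC-003", "executed": False, "passed": False},
    {"id": "TC-004", "executed": True,  "passed": True},
]

total = len(test_cases)
executed = sum(tc["executed"] for tc in test_cases)
passed = sum(tc["passed"] for tc in test_cases)
print(f"executed: {100 * executed / total:.0f}%, "
      f"passed: {100 * passed / total:.0f}%")
# A defect found by random testing should be captured as a new test case,
# so that coverage grows over the asymptotic period.
```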