Software Project Management

SOFTWARE PROJECT MANAGEMENT (SPM) STUDY MATERIAL

This document contains the study material for Software Project Management as per the JNTU syllabus.

CONTENTS (chapter and number of questions)

UNIT I
  1. Conventional Software Management (3 questions)
  2. Evolution of Software Economics (4 questions)
UNIT II
  3. Improving Software Economics (10 questions)
UNIT III
  4. The Old Way and the New (4 questions)
  5. Life-Cycle Phases (3 questions)
UNIT IV
  6. Artifacts of the Process (10 questions)
  7. Model-Based Software Architecture (6 questions)
UNIT V
  8. Workflows of the Process (3 questions)
  9. Checkpoints of the Process (4 questions)
  10. Iterative Process Planning (4 questions)
UNIT VI
  11. Project Organizations and Responsibilities (4 questions)
  12. Process Automation (7 questions)
UNIT VII
  13. Project Control and Process Instrumentation (6 questions)
  14. Tailoring the Process (9 questions)
UNIT VIII
  15. Modern Project Profiles (7 questions)
  16. Next-Generation Software Economics (4 questions)
  17. Modern Process Transitions (2 questions)
Appendix D. CCPDS-R Case Study (18 questions)


TEACHING PLAN: III/V SEMESTER, MC 5.5.1, SOFTWARE PROJECT MANAGEMENT
(Unit, item number, topic, book reference pages, and number of periods)

UNIT I
  1  Conventional Software Management
     1.1  The Waterfall Model (pp. 6-17, 2 periods)
     1.2  Conventional Software Management Performance (pp. 17-20, 1 period)
  2  Evolution of Software Economics
     2.1  Software Economics (pp. 21-26, 1 period)
     2.2  Pragmatic Software Cost Estimation (pp. 26-30, 2 periods)
UNIT II
  3  Improving Software Economics
     3.1  Reducing Software Product Size (pp. 33-40, 2 periods)
     3.2  Improving Software Processes (pp. 40-43, 1 period)
     3.3  Improving Team Effectiveness (pp. 43-46, 1 period)
     3.4  Improving Automation through Software Environments (pp. 46-48, 1 period)
     3.5  Achieving Required Quality (pp. 48-51, 1 period)
     3.6  Peer Inspections: A Pragmatic View (pp. 51-54, 2 periods)
UNIT III
  4  The Old Way and the New
     4.1  The Principles of Conventional Software Engineering (pp. 55-63, 2 periods)
     4.2  The Principles of Modern Software Management (pp. 63-66, 2 periods)
     4.3  Transitioning to an Iterative Process (pp. 66-68, 1 period)
  5  Life-Cycle Phases
     5.1  Engineering and Production Stages (pp. 74-76, 1 period)
     5.2  Inception Phase (pp. 76-77, 1 period)
     5.3  Elaboration Phase (pp. 77-79, 1 period)
     5.4  Construction Phase (pp. 79-80, 1 period)
     5.5  Transition Phase (pp. 80-82, 2 periods)
UNIT IV
  6  Artifacts of the Process
     6.1  The Artifact Sets (pp. 84-96, 3 periods)
     6.2  Management Artifacts (pp. 96-103, 2 periods)
     6.3  Engineering Artifacts (pp. 103-105, 2 periods)
     6.4  Pragmatic Artifacts (pp. 105-108, 1 period)
  7  Model-Based Software Architecture
     7.1  Architecture: A Management Perspective (pp. 110-111, 1 period)
     7.2  Architecture: A Technical Perspective (pp. 111-116, 1 period)

UNIT V
  8  Workflows of the Process
     8.1  Software Process Workflows (pp. 118-121, 1 period)
     8.2  Iteration Workflows (pp. 121-124, 1 period)
  9  Checkpoints of the Process
     9.1  Major Milestones (pp. 126-132, 2 periods)
     9.2  Minor Milestones (pp. 132-133, 1 period)
     9.3  Periodic Status Assessments (pp. 133-134, 1 period)
  10 Iterative Process Planning
     10.1  Work Breakdown Structures (pp. 139-146, 2 periods)
     10.2  Planning Guidelines (pp. 146-149, 1 period)
     10.3  The Cost and Schedule Estimating Process (pp. 149-150, 1 period)
     10.4  The Iteration Planning Process (pp. 150-153, 1 period)
     10.5  Pragmatic Planning (pp. 153-154, 1 period)
UNIT VI
  11 Project Organizations and Responsibilities
     11.1  Line-of-Business Organizations (pp. 156-158, 1 period)
     11.2  Project Organizations (pp. 158-165, 2 periods)
     11.3  Evolution of Organizations (pp. 165-166, 1 period)
  12 Process Automation
     12.1  Tools: Automation Building Blocks (pp. 168-172, 1 period)
     12.2  The Project Environment (pp. 172-186, 2 periods)
UNIT VII
  13 Project Control and Process Instrumentation
     13.1  The Seven Core Metrics (pp. 188-190, 1 period)
     13.2  Management Indicators (pp. 190-196, 2 periods)
     13.3  Quality Indicators (pp. 196-199, 1 period)
     13.4  Life-Cycle Expectations (pp. 199-201, 1 period)
     13.5  Pragmatic Software Metrics (pp. 201-202, 1 period)
     13.6  Metrics Automation (pp. 202-208, 1 period)
  14 Tailoring the Process
     14.1  Process Discriminants (pp. 209-218, 2 periods)
     14.2  Example: Small-Scale Project versus Large-Scale Project (pp. 218-220, 1 period)
UNIT VIII
  15 Modern Project Profiles
     15.1  Continuous Integration (pp. 226-227, 1 period)
     15.2  Early Risk Resolution (pp. 227-228, 1 period)
     15.3  Evolutionary Requirements (pp. 228-229, 1 period)
     15.4  Teamwork among Stakeholders (pp. 229-231, 1 period)
     15.5  Top 10 Software Management Principles (pp. 231-232, 1 period)
     15.6  Software Management Best Practices (pp. 232-236, 1 period)

  16 Next-Generation Software Economics
     16.1  Next-Generation Cost Models (pp. 237-242, 2 periods)
     16.2  Modern Software Economics (pp. 242-247, 1 period)
  17 Modern Process Transitions
     17.1  Culture Shifts (pp. 248-251, 1 period)
     17.2  Denouement (pp. 251-254, 1 period)
  Total periods: 75

PART I: SOFTWARE MANAGEMENT RENAISSANCE

Chapter 1: CONVENTIONAL SOFTWARE MANAGEMENT

Software Crisis: The flexibility of software is both a boon and a bane. Boon: it can be programmed to do anything. Bane: because of this "anything" factor, it becomes difficult to plan, monitor, and control software development. This unpredictability is the basis of what is known as the software crisis.

A number of analyses of the state of the software engineering industry over recent decades concluded that the success rate of software projects is very low. Their findings can be summarized as follows:
1. Software development is highly unpredictable. Only about 10% of projects are delivered successfully within their initial budget and schedule estimates.
2. The management discipline, more than technology advances, is responsible for the success or failure of projects.
3. The level of software scrap and rework is indicative of an immature process.
These conclusions, while showing the magnitude of the problem and the state of current software management, prove that there is much room for improvement.

1.1 THE WATERFALL MODEL

The conventional software process is based on the waterfall model, which can be taken as a benchmark of the software development process. In retrospect, we examine the waterfall model theory to analyze how the industry ignored much of the theory yet still managed to evolve both good and not-so-good practices, particularly while using modern technologies.

1.1.1 IN THEORY

Winston Royce's paper "Managing the Development of Large Scale Software Systems," based on lessons learned while managing large software projects, provides a summary of the conventional software management philosophy. Three primary points are presented in the paper:
1. There are two essential steps common to the development of computer programs: analysis and coding.
2. In addition to these steps, several other "overhead" steps must be introduced: system requirements definition, software requirements definition, program design, and testing. These steps help in managing and controlling the intellectual freedom associated with software development (in comparison with physical development processes). The project profile and the basic steps in developing a large-scale program are shown in the waterfall model figures below.
3. The basic framework described in the waterfall model is risky and failure-prone. The testing phase, taken up toward the end of the development life cycle, provides the first opportunity to physically try out timing, storage, input/output transfers, and so on against what was analyzed theoretically. If the design must be changed at this point, the software requirements on which the design was based are violated.

Waterfall Model, Part I: The two basic steps to building a program are analysis and coding, both of which involve creative work that directly contributes to the usefulness of the end product.
Waterfall Model, Part II: The large-scale system approach: system requirements, software requirements, analysis, program design, coding, testing, and operations.
Waterfall Model, Part III: Five necessary improvements for this approach to work: (1) complete program design before analysis and coding begin; (2) maintain current and complete documentation; (3) do the job twice, if possible; (4) plan, control, and monitor testing; (5) involve the customer.

Most of these development risks can be eliminated by five improvements to the basic waterfall process:

1. Program design comes first. The first improvement is to introduce a preliminary program design phase between the software requirements generation phase and the analysis phase. This ensures that the software will not fail because of storage, timing, and data flux. As analysis proceeds in the succeeding phase, the program designer should make the analysts aware of the consequences of the storage, timing, and operational constraints. If the total resources required are insufficient, or if the nascent operational design is wrong, the problem will be recognized at this very early stage, and the iteration of the requirements analysis or preliminary design can be taken up without adversely affecting the final design, coding, and testing activities. The steps to be followed for this type of program design are:
a) Begin the design process with program designers, not analysts or programmers.
b) Design, define, and allocate the data processing modes even at the risk of being wrong.
c) Allocate processing functions, design the database, allocate execution time, define interfaces and processing modes with the operating system, describe input and output processing, and define preliminary operating procedures.
d) Write an overview document that is understandable, informative, and current, so that everyone on the project can gain an elemental understanding of the system.
In modern parlance, this "program design first" approach is termed architecture-first development.

2. Document the design. The amount of documentation required on a software project is very large, far more than most programmers, analysts, or program designers would produce on their own. So much documentation is required because:
- Each designer must communicate with interfacing designers, managers, and customers.
- During the early phases, the documentation is the design.
- The monetary value of the documentation lies in the support it provides later to the test team, the maintenance team, and operations personnel who may not be computer literate.
The artifacts must be documented in an understandable style and be available to all the stakeholders and teams. If advanced notations, languages, browsers, tools, and methods are used, much of the documentation is produced along the way and does not require a separate, concentrated effort.

3. Do it twice. When a program is developed for the first time, the version finally delivered to the customer for deployment should be the second version, produced after a critical review of design and operations. This can be done with the entire process carried out in miniature, on a time scale that is relatively small with respect to the overall effort.

In the first version, a broad view should be taken in which trouble spots in the design can easily be sensed and removed using alternative models, keeping the straightforward aspects on the back burner as long as possible. With this broad view, an error-free set of programs should be arrived at. This is a concise and simplistic description of architecture-first development, in which an architecture team is responsible for the initial engineering. Generalizing this practice to "do it N times" leads to the principle of modern-day iterative development. Software development is highly dependent on human judgment. The first-pass simulation allows experimentation by testing key hypotheses; this kind of testing reduces the scope of human judgment and, in turn, removes the over-optimism that human judgment introduces into the design. This is a description of the spirit of iterative development and its inherent advantages for risk management.

4. Plan, control, and monitor testing. The test phase uses the largest share of manpower, computer time, and management judgment, and it carries the greatest cost and schedule risk. Because testing is taken up almost at the end of the development cycle, there may be few alternatives left. If problems remain uncovered and unresolved despite the preceding three recommendations, it is in the test phase that some important things can still be done:
- Employ a team of test specialists who were not responsible for the original design.
- Employ visual/manual inspections to spot obvious errors such as dropped minus signs, missing factors of two, and jumps to wrong addresses.
- Test every logic path.
- Employ the final checkout on the target computer.
In the modern process, testing is a life-cycle activity that requires fewer total resources and uncovers issues far earlier in the life cycle, when backup alternatives are still available.

5. Involve the customer. At the design stage, the question of what the software does is subject to wide interpretation, even after a prior agreement between the customer and the designer. Involving the customer in a formal way commits him or her at earlier points, before final delivery. At three points in the development cycle after the requirements definition stage, the customer's involvement improves the insight, judgment, and commitment in the development effort. These three points are:
- A preliminary software review following the preliminary program design step
- A sequence of critical software design reviews during program design
- A final software acceptance review following testing
This has been a regular practice in the industry, with positive results. Involving the customer with early demonstrations and planned alpha/beta releases is a proven, valuable technique.

The above discussion of the issues raised in the paper shows only minor flaws in the theory, even when it is applied in the context of today's technology. The criticism should be targeted at the practice of the approach, which incorporated various unsound and unworkable elements. Past and current practice of the waterfall model approach is referred to as the conventional software management approach or process. The waterfall process is no longer a good framework for modern software engineering practices and technologies, but it can be used as the reality benchmark against which to rationalize an improved process that is devoid of the fundamental flaws of the conventional process.

1.1.2 IN PRACTICE

Projects using the conventional process exhibited the following symptoms characterizing their failure:

1. Protracted integration and late design breakage
2. Late risk resolution
3. Requirements-driven functional decomposition
4. Adversarial stakeholder relationships
5. Focus on documents and review meetings

Protracted Integration and Late Design Breakage

Figure 1-2 illustrates development progress versus time for a typical development project using the waterfall model management process. Progress is defined as the percentage of code that is demonstrable in its target form. (Software that is compilable and executable is not necessarily complete, compliant, or up to specifications.) From the figure we can notice the following characteristics of the development activities:
- Early success via paper designs and thorough briefings
- Commitment to code late in the life cycle
- Integration difficulties due to unforeseen implementation issues and interface ambiguities
- Heavy budget and schedule pressure to get the system working
- Late and last-minute efforts at non-optimal fixes, with no time for redesign
- A very fragile, unmaintainable product delivered late

Given the immature languages and technologies used in the conventional approach, there was substantial emphasis on perfecting the design before committing it to code, because once committed the code was difficult to understand and expensive to change. This practice resulted in the use of multiple formats (requirements in English, preliminary design in flowcharts, detailed design in program design languages, and implementations in target languages such as FORTRAN, COBOL, or C) and in error-prone, labor-intensive translations between formats.

Conventional techniques imposed a waterfall model on the design process: the entire system was designed on paper, then implemented all at once, then integrated. This resulted in late integration and lower performance levels. Only at the end of this process was there an opportunity for system testing to verify the soundness of the fundamental architecture, its interfaces, and its structure.

Generally, in conventional processes, 40% or more of life-cycle resources are consumed by integration and testing.

Table 1-1. Expenditures by activity for a conventional software project

  ACTIVITY                 COST
  Management               5%
  Requirements             5%
  Design                   10%
  Code and unit testing    30%
  Integration and test     40%
  Deployment               5%
  Environment              5%
  Total                    100%

Late Risk Resolution

Lack of early risk resolution is another serious issue associated with the waterfall process. It stems from the focus on early paper artifacts, in which the real design, implementation, and integration risks remain relatively intangible. The risk profile of waterfall model projects includes four distinct periods of risk exposure, where risk is defined as the probability of missing a cost, schedule, feature, or quality goal. Early in the life cycle, as the requirements are being specified, the actual risk exposure is highly unpredictable. After a design concept is available, even on paper, the risk exposure stabilizes; it usually stabilizes at a relatively high level, because there are too few tangible facts to support an objective assessment. As the system is coded, some of the individual component risks are resolved. As integration begins, the real system-level qualities and risks become tangible, many real design issues are resolved, and engineering trade-offs are made. Resolving these issues late in the life cycle, when there is great inertia inhibiting change, is very expensive.

Figure 1-2. Progress profile of a conventional software project: development progress (percent coded) versus project schedule for the sequential activities of requirements analysis, program design, coding and unit testing, and integration and testing, with products moving from ad hoc text and flowcharts through source code to configuration baselines. The curve shows fragile early baselines, late design breakage when integration begins, and protracted integration and testing.

Consequently, projects tend to have a protracted integration phase (Figure 1-2) as major redesign initiatives are implemented. This process tends to resolve the important risks, but at the expense of quality and therefore maintainability. Redesign at this stage often means tying up loose ends at the last minute and patching bits and pieces into a single coherent whole. Such changes do not preserve the overall design integrity or its maintainability.

Requirements-Driven Functional Decomposition

Traditionally, the software development process has been requirements-driven: an attempt is made to provide a precise requirements definition and then to implement exactly those requirements. This approach depends on specifying requirements completely and unambiguously before other development activities begin, it naively treats all requirements as equally important, and it depends on those requirements remaining constant over the software development life cycle. These conditions rarely occur in the real world. Specification of requirements is a difficult and important part of the software development process; virtually every major software program suffers severe difficulties in requirements specification.

Treating all requirements as equal diverts substantial engineering hours from the driving requirements to less important ones, and wastes effort on paperwork associated with traceability, testability, logistics support, and so on. Much of this paperwork is discarded later as the driving requirements and the team's understanding evolve.

Another property of the conventional approach is that requirements are typically specified in a functional manner. The classic waterfall process is built upon the fundamental assumption that the software itself is decomposed into functions, and requirements are then allocated to the resulting components. This decomposition is different from a decomposition based on object-oriented design (OOD) and the use of existing components. Such functional decompositions, and the contracts, subcontracts, and work breakdown structures built around them, preclude an architecture-driven approach.

Figure 1-3. Risk profile of a conventional software project across its life cycle: project risk exposure (high to low) over the requirements, design, coding, integration, and testing activities, moving through a risk exploration period, a risk elaboration period, a focused risk resolution period, and a controlled risk management period.

Adversarial Stakeholder Relationships

The conventional process results in adversarial stakeholder relationships because of the difficulties of requirements specification and because information is exchanged solely through paper documents in ad hoc formats. The lack of a uniform, standard notation resulted in subjective reviews and opinionated exchanges of information. The typical sequence of events for most contractual software efforts was:
1. The contractor prepared a draft contract-deliverable document capturing an intermediate artifact and delivered it to the customer for approval.
2. The customer was expected to provide comments within 2 to 4 weeks.
3. The contractor incorporated these comments and, in another 2 to 4 weeks, submitted a final version for approval.
This one-time review process encouraged high levels of sensitivity on the part of both customers and contractors. The overhead of such a paper-exchange review process was intolerable; it resulted in mistrust between customer and contractor and made balancing requirements, schedule, and cost a difficult proposition.

Focus on Documents and Review Meetings

The conventional process focused more on producing documents that attempted to describe the software product than on producing tangible increments of the product itself. Even milestones were discussed in meetings in terms of documents only. Contractors ended up spending their effort on producing documentary evidence of meeting milestones and demonstrating progress to stakeholders, instead of reducing risk and producing quality software. Most design reviews therefore had low engineering value and high cost in terms of the effort and schedule involved in their preparation and conduct.

TABLE 1-2. Results of conventional software project design reviews

  Apparent result: Big briefing to a diverse audience.
  Real results: Only a small percentage of the audience understands the software. Briefings and documents expose few of the important assets and risks of complex software systems.

  Apparent result: A design that appears to be compliant.
  Real results: There is no tangible evidence of compliance. Compliance with ambiguous requirements is of little value.

  Apparent result: Coverage of requirements (typically hundreds).
  Real results: Few of the requirements (tens) are the design drivers. Dealing with all requirements dilutes the focus on the critical drivers.

  Apparent result: A design considered "innocent until proven guilty."
  Real results: The design is always guilty. Design flaws are exposed later in the life cycle.

Diagnosing these five symptoms (protracted integration and late design breakage, late risk resolution, requirements-driven functional decomposition, adversarial stakeholder relationships, and focus on documents and review meetings) can be difficult, particularly in the early phases of the life cycle, when the problems created by the conventional model have not yet surfaced. Modern software processes should therefore use mechanisms that assess project status early in the life cycle and continue with objective, periodic checkups.

1.2 CONVENTIONAL SOFTWARE MANAGEMENT PERFORMANCE

Barry Boehm's "Industrial Software Metrics Top 10 List" is a good, objective characterization of the state of software development.

Many of the metrics are gross generalizations, yet they accurately describe some of the fundamental economic relationships that resulted from the conventional process practiced in the past. The following are the metrics in Boehm's top 10 list:

1. Finding and fixing a software problem after delivery costs 100 times more than finding and fixing it in the early design phases. This metric applies equally well to every dimension of process improvement, and to processes other than software development.

2. You can compress software development schedules 25% of nominal, but no more. One reason is that an N% reduction in schedule requires an M% (M > N) increase in human resources, which entails additional management overhead. Generally, the limit of flexibility gained by scheduling activities concurrently, conserving sequential activities, and working within other resource constraints is about 25%. For example, suppose a 100-staff-month effort is optimally achievable in 10 months by 10 people. Could the job be done in one month with 100 people, or in two months with 50 people? These alternatives are unrealistic. The 25% compression metric says the limit here is 7.5 months, requiring on the order of 20 additional staff-months; any further schedule compression is doomed to fail. On the other hand, an optimal schedule can be extended almost arbitrarily and, depending on the staffing, the work can be performed over a much longer time with far fewer human resources.

3. For every $1 spent on development, $2 is spent on maintenance. Boehm calls this the "iron law of software development." Whether it is a long-lived commercial product requiring half-yearly upgrades or a custom software system, twice as much money will be spent over the maintenance life cycle as was spent in the development life cycle.

4. Software development and maintenance costs are primarily a function of the number of source lines of code (SLOC).

This metric is most applicable to custom software development, in the absence of commercial components and reuse, as was the case in the conventional era.

5. Variations among people account for the biggest differences in software productivity. This is a key piece of conventional wisdom: hire good people. When objective knowledge of the reasons for success or failure is not available, the obvious scapegoat is the quality of the people. This judgment is subjective and difficult to challenge.

6. The overall ratio of software to hardware costs is still growing. In 1955 it was 15:85; in 1985, 85:15. The 85% is about the level of functionality allocated to software in system solutions, not just about software productivity.

7. Only about 15% of software development effort is devoted to programming. This is an indicator of the need for balance among the other activities besides coding: requirements management, design, testing, planning, project control, change management, and tool preparation and selection.

8. Software systems and products typically cost 3 times as much per SLOC as individual software programs. Software-system products (systems of systems) cost 9 times as much. This exponential relationship is the essence of what is called diseconomy of scale: unlike other commodities, the more software you build, the more expensive it is per SLOC.

9. Walkthroughs catch 60% of errors. Read together with metric 1, this means that walkthroughs, although they catch 60% of the errors, are not catching the errors that matter, and certainly not early enough in the life cycle. All defects are not created equal. Human inspection methods such as walkthroughs are good at catching surface problems and style issues, and when ad hoc notations are used they may be useful as a quality assurance method. For uncovering issues such as resource contention, performance bottlenecks, control conflicts, and other higher order problems, human methods are not efficient.

10. 80% of the contribution comes from 20% of the contributors. This is the Pareto principle, and it holds in any engineering discipline. It can be expanded into more specific interpretations for software:
   80% of the engineering is consumed by 20% of the requirements.
   80% of the software cost is consumed by 20% of the components.
   80% of the errors are caused by 20% of the components.
   80% of the software scrap and rework is caused by 20% of the errors.
   80% of the resources are consumed by 20% of the components.
   80% of the engineering is accomplished by 20% of the tools.
   80% of the progress is made by 20% of the people.
These relationships provide good benchmarks for evaluating process improvements and technology improvements. They are rough rules of thumb that objectively characterize the performance of the conventional software management process and technologies.

Questions on this chapter:
1. Describe the five possible improvements to the waterfall model.
2. List the symptoms of ill-managed software processes that use the conventional models.
3. Explain Boehm's Industrial Software Metrics Top 10 List.

Difficult words: scrap - waste, refuse; amass - collect over time; embryonic - early in the life cycle; deploy - install; concise - brief and clear; shoehorn - force into a space that is too small; fragile - delicate; adversarial - characterized by opposition; façade - deceptive front.

Chapter 2: EVOLUTION OF SOFTWARE ECONOMICS

Software engineering is dominated by intellectual activities focused on solving problems of high complexity with numerous unknowns and competing points of view. The early software approaches of the 1960s and 1970s can best be described as craftsmanship, with each project using a custom process and custom tools. In the 1980s and 1990s, the software industry matured and transitioned to more of an engineering discipline. However, most software projects in this era were still primarily research-intensive, dominated by human creativity and diseconomies of scale. The current generation of software processes is driving toward a more production-intensive approach dominated by automation and economies of scale.

2.1 SOFTWARE ECONOMICS

Software cost models can be abstracted into a function of five basic parameters: size, process, personnel, environment, and required quality.
1. The size of the end product (in human-generated components), quantified in terms of the number of source instructions or the number of function points required to develop the required functionality.
2. The process used to produce the end product, in particular the ability of the process to avoid non-value-adding activities (rework, bureaucratic delays, communications overhead).
3. The capabilities of the software engineering personnel, particularly their experience with the computer science issues and the application domain issues of the project.
4. The environment, which is made up of the tools and techniques available to support efficient software development and to automate the process.
5. The required quality of the product, including its features, performance, reliability, and adaptability.
The relationship among these parameters and the estimated cost can be written as:

    Effort = (Personnel) x (Environment) x (Quality) x (Size ^ Process)

This is an abstracted form of all parametric models for estimating software costs. The relationship between effort and size exhibits a diseconomy of scale in most current software cost models, because the process exponent is greater than 1.0.
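To make the diseconomy of scale concrete, here is a minimal sketch of the abstracted cost-model form above. The process exponent of 1.2 and the nominal multipliers of 1.0 are illustrative assumptions, not calibrated COCOMO constants.

```python
# Minimal sketch of the abstracted parametric cost-model form:
#   Effort = (Personnel) x (Environment) x (Quality) x (Size ^ Process)
# The exponent and multipliers below are illustrative assumptions only.

def estimated_effort(size_ksloc: float,
                     process_exponent: float = 1.2,  # > 1.0 gives a diseconomy of scale
                     personnel: float = 1.0,         # 1.0 = nominal multiplier
                     environment: float = 1.0,
                     quality: float = 1.0) -> float:
    """Estimated effort in staff-months for a given human-generated size (KSLOC)."""
    return personnel * environment * quality * (size_ksloc ** process_exponent)

if __name__ == "__main__":
    for size in (10.0, 100.0):                      # 10,000 vs. 100,000 SLOC
        effort = estimated_effort(size)
        sloc_per_sm = size * 1000 / effort
        print(f"{size:5.0f} KSLOC -> {effort:6.1f} staff-months "
              f"(~{sloc_per_sm:3.0f} SLOC per staff-month)")
```

Because the exponent exceeds 1.0, the larger system yields fewer SLOC per staff-month, which is exactly the diseconomy of scale discussed next.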

Contrary to general manufacturing processes, the more software you build, the more expensive it is per unit item. For example, for a given application, a 10,000-line software solution will cost less per line than a 100,000-line solution. How much less? Assume that the 100,000-line system requires 900 staff-months of development, or about 111 lines per staff-month, or 1.37 hours per line. For the 10,000-line system the estimate would be 62 staff-months, or about 175 lines per staff-month, or 0.87 hours per line. The per-line cost of the smaller application is less than that of the larger application. The reason: the complexity of managing interpersonal communications grows as the number of team members scales up.

Figure 2-1 shows three generations of basic technology advancement in tools, components, and processes. Assuming constant levels of quality and personnel, the y-axis represents software unit cost (per SLOC, per function point, per component, and so on) and the x-axis represents the life cycle of the software development. The three generations of software development are defined as conventional, transition, and modern practices.

FIGURE 2-1. Three generations of software economics leading to the target objective of improved software ROI:
  Conventional (1960s-1970s): waterfall model, functional design, diseconomy of scale. Environments/tools: custom. Size: 100% custom. Process: ad hoc. Typical project performance: predictably bad (always over budget and over schedule).
  Transition (1980s-1990s): process improvement, encapsulation-based, diseconomy of scale. Environments/tools: off-the-shelf, separate. Size: 30% component-based, 70% custom. Process: repeatable. Typical project performance: unpredictable (infrequently on budget and on schedule).
  Modern practices (2000 and on): iterative development, component-based, return on investment. Environments/tools: off-the-shelf, integrated. Size: 70% component-based, 30% custom. Process: managed/measured. Typical project performance: predictable (usually on budget and on schedule).

Technologies for environment automation, size reduction, and process improvement are not independent of one another. In each era, the key is complementary growth in all technologies. For example, the process advances could not have been used successfully without new component technologies and increased tool automation. The use of modern practices and the promise of improved economics are not always guaranteed. Organizations are achieving better economies of scale in successive technology eras because of the multiplicity of similar projects, their very large size (systems of systems), and the long-lived nature of most of the products. Figure 2-2 provides an overview of how a return-on-investment (ROI) profile can be achieved in subsequent efforts across life cycles of different domains.

2.2 PRAGMATIC SOFTWARE COST ESTIMATION

A critical problem in software cost estimation is the non-availability of case studies of projects that used an iterative development process. The existing tools all claim suitability for iterative development but cannot point to empirical studies of their success. Further, the software industry has no standard or consistent metrics or atomic units of measure, so the available data cannot be used for comparison. It is difficult to collect a homogeneous set of project data within an organization, and far more difficult to collect data across organizations with different processes, languages, domains, and so on. Even the fundamental unit of size, the SLOC, is counted differently across the industry, and modern languages such as Ada 95 and Java do not permit a simple definition of a source line reportable by the compiler. The exact definition of a function point or a SLOC is not very important, as long as everyone uses the same definition.

Three topics of interest in the cost estimation debate among developers and vendors are:
1. Which cost estimation model to use
2. Whether to use SLOC or FP as a measure of size
3. What constitutes a good estimate

FIGURE 2-2. Return on investment across different domains: (a) achieving ROI across a line of business through successive systems, with investment in a common architecture, process, and environment for all line-of-business systems paying off from the first system through the Nth system; (b) achieving ROI across a project with multiple iterations, with investment in a robust architecture, an iterative process, and process automation paying off from the first iteration through the Nth iteration; (c) achieving ROI across a life cycle of product releases, with investment in a product architecture, a life-cycle release process, and process automation paying off from the first release through the Nth release.

Among the commercial cost estimation models available, COCOMO, Ada COCOMO, and COCOMO II are the most open and well-documented. With reference to the measurement of software size, there are basically two points of view: SLOC and function points. (A third, ad hoc point of view, held by immature developers, uses no systematic measurement of size.) Many software experts consider SLOC a poor measure of size.

Generally, people are more comfortable with gross figures such as "1,000 lines of code" than with a description like "20 function points, 6 classes, 5 use cases, 4 object points, 6 files, 2 subsystems, 1 component, or 6,000 bytes." Even when given a description of the latter type, people tend to ask for the corresponding SLOC figure. So SLOC is still relevant, both as a measure and in people's minds. Today, however, advances in languages, the use of components, automatic source code generation, and object-oriented programming have made SLOC a much more ambiguous measure.

The use of function points (FPs) has a large following. The primary advantage of FPs is that they are independent of technology, which makes them a better primitive unit for comparisons among projects and organizations. A disadvantage is that the primitive definitions are abstract and the measurements are not easily derived directly from the evolving artifacts. Anyone doing cross-project or cross-organization comparisons should use FP as the measure of size. FP is also the more accurate estimator in the early phases of a project life cycle; in later phases, SLOC becomes a more useful and precise measurement basis for various metrics perspectives.

The general accuracy of conventional cost models such as COCOMO has been described as "within 20% of actuals, 70% of the time." This level of unpredictability in the conventional software development process is frightening, particularly as missing schedule and cost targets is the common case. There is an interesting phenomenon in scheduling labor-intensive efforts: unless specific incentives are provided for beating the deadline, projects rarely finish ahead of plan. Teams and individuals do their own sub-planning to meet their objectives; if the time objective is lenient, they spend their energy elsewhere or pursue more quality than is required. People generally do not, by nature, propose to accelerate a schedule.

Even if some individuals propose to accelerate the schedule, it meets resistance from others who have to synchronize their activities. So plans need to be as ambitious as can possibly be achieved.

Most real-world use of cost models is bottom-up (substantiating a target cost) rather than top-down (estimating the "should" cost). Figure 2-3 illustrates the predominant practice: the software project manager defines the target cost of the software and then manipulates the parameters and sizing until the target cost can be justified. The rationale for the target cost may be to win a proposal, to solicit customer funding, to attain internal corporate funding, or to achieve some other goal.

This practice is necessary in order to analyze cost risks and to understand sensitivities and trade-offs objectively. It gives the project manager scope to examine the risks associated with achieving the target costs and to discuss this information with other stakeholders. It results in various combinations of plans, designs, processes, or scope being proposed. This process provides a platform for a basis of estimate and an overall cost analysis.

Independent cost estimates, done by people independent of the development team, are generally inaccurate. A credible estimate can be produced by a competent team consisting of the software project manager and the software architecture, development, and test managers, iteratively preparing several estimates and sensitivity analyses. Such a team ultimately takes ownership of that cost estimate for the project to succeed. A good software cost estimate has the following attributes:
- It is conceived and supported by the project manager, architecture team, development team, and test team accountable for performing the work.
- It is accepted by all stakeholders as ambitious but realizable.
- It is based on a well-defined software cost model with a credible basis.
- It is based on a database of relevant project experience that includes similar processes, similar environments, similar quality requirements, and similar people.
- It is defined in enough detail that its key risk areas are understood and the probability of success is objectively assessed.
Achieving all these attributes in one estimate is rarely possible. A good estimate can be achieved in a straightforward manner in later life-cycle phases of a mature project using a mature process.

FIGURE 2-3. The predominant cost estimation process: the software manager, software architecture manager, software development manager, and software assessment manager tell the cost modelers, "This project must cost $X to win this business"; the cost modelers respond with "Here is how to justify that cost," producing a cost estimate together with the associated risks, options, trade-offs, and alternatives.

Questions on this chapter:
1. Describe the evolution of software economics.
2. Show how the target objective was achieved in the third generation of software evolution.
3. Explain how ROI is achieved across different problem domains over the generations of software evolution.
4. Explain (a) which cost estimation model to use, (b) which software metric to use for cost estimation, and (c) the attributes of a good cost estimate.

Chapter 3: IMPROVING SOFTWARE ECONOMICS

Improvements in the economics of software development are not only difficult to achieve but also difficult to measure and substantiate.

A focus on improving only one aspect of the software development process will not realize significant economic improvement, even if that one aspect improves spectacularly. The key to substantial improvement is a balanced focus across interrelated dimensions. The important dimensions are structured around the five basic parameters of the software cost model:
1. Reducing the size or complexity of the product to be developed
2. Improving the development process
3. Using more-skilled personnel and better teams
4. Using better environments (tools to automate the process)
5. Trading off or backing off on quality thresholds
These parameters are given in priority order for most software domains.

TABLE 3-1. Important trends in improving software economics

  Size (abstraction and component-based development technologies): higher order languages (C++, Ada 95, Java, Visual Basic, etc.); object-oriented analysis, design, and programming; reuse; commercial components.
  Process (methods and techniques): iterative development; process maturity models; architecture-first development; acquisition reform.
  Personnel (people factors): training and personnel skill development; teamwork; win-win cultures.
  Environment (automation technologies and tools): integrated tools (visual modeling, compiler, editor, debugger, change management, etc.); open systems; hardware platform performance; automation of coding, documents, testing, and analyses.
  Quality (performance, reliability, accuracy): hardware platform performance; demonstration-based assessment; statistical quality control.

The table lists some of the technology developments, process improvement efforts, and management approaches targeted at improving the economics of software development and integration. There are significant dependencies among these trends: tools enable size reduction and process improvements, size reduction approaches lead to process changes, and process improvements drive tool requirements.

In the domain of user interface software, a decade earlier, development teams had to spend extensive time analyzing operations, human factors, screen layout, and screen dynamics, all on paper, because committing designs to construction was very expensive. The initial stages therefore carried a heavy workload of paper artifacts that had to be frozen after obtaining user concurrence so that the high construction costs could be minimized. Today, graphical user interface (GUI) technology enables a new and different process; mature GUI technology has made the conventional process obsolete. GUI tools let developers construct an executable user interface faster and at less cost. Paper descriptions are no longer necessary, resulting in better efficiency. Operations analysis and human factors analysis, still relevant, are carried out in a realistic target environment using existing primitives and building blocks. Engineering and feedback cycles now take only a few days or weeks, a great reduction from the months required earlier. Furthermore, the old process could not afford re-runs: designs were completed after thorough analysis and design in a single construction cycle.

The new GUI process is geared toward taking the user interface through a few realistic versions, incorporating user feedback all along the way, and toward achieving a stable understanding of the requirements and the design issues in balance with one another. Ever-increasing advances in hardware technology have also been influencing software technology improvements: the availability of higher CPU speeds, more memory, and more network bandwidth has eliminated many complexities, and simpler, brute-force solutions are now possible.

3.1 REDUCING SOFTWARE PRODUCT SIZE

Producing a product that achieves the design goals with the minimum amount of human-generated source material is the most significant way to improve return on investment (ROI) and affordability. Component-based development is the way to reduce the source language size. Reuse, object-oriented technology, automatic code generation, and higher order programming languages are all focused on achieving a system with fewer lines of human-specified source directives or statements. This size reduction is the primary motivation behind improvements in:
- higher order languages (such as C++, Ada 95, Java, Visual Basic, and 4GLs)
- automatic code generators (CASE tools, visual modeling tools, GUI builders)
- reuse of commercial components (operating systems, windowing environments, DBMSs, middleware, networks)
- object-oriented technologies (UML, visual modeling tools, architecture frameworks)
There is one limitation to this kind of size reduction. The recommendation comes from a simple observation: code that isn't there need not be developed and can't break. This is not entirely the case. When size-reducing technologies are used, they reduce the number of human-generated source lines.

However, all of them tend to increase the amount of computer-executable code, which negates the second part of the observation. Mature and reliable size-reduction technologies are powerful at producing economic benefits. Immature technologies may reduce the development size but require more investment to achieve the required levels of quality and performance, which may have a negative impact on overall project performance.

3.1.1 LANGUAGES

Universal function points (UFPs) are useful metrics for language-independent, early life-cycle estimates. UFPs indicate the relative program sizes required to implement a given functionality. The basic units of function points are external user inputs, external outputs, internal logical data groups, external data interfaces, and external inquiries. SLOC metrics are useful estimators only after a candidate solution is formulated and an implementation language is known. Substantial data has been documented relating SLOC to function points, as shown in Table 3-2.

TABLE 3-2. Language expressiveness of some popular languages

  LANGUAGE        SLOC PER UFP
  Assembly        320
  C               128
  FORTRAN 77      105
  COBOL 85        91
  Ada 83          71
  C++             56
  Ada 95          55
  Java            55
  Visual Basic    35

Visual Basic is useful for building simple interactive applications but not for real-time, embedded programs. Ada 95 is useful for mission-critical, real-time applications but not for parallel, scientific, highly number-crunching applications on high-end configurations. Data such as this, spanning application domains, corporations, and technology generations, should be interpreted and used with great care.

Two observations within the data concern the differences and relationships between Ada 83 and Ada 95, and between C and C++. The difference in expressiveness between the two versions of Ada is mainly due to the features added to support OOP. The difference between C and C++ is more profound: C++ incorporated several of the advanced features of Ada with more support for OOP, and it was developed as a superset of C. This has its pros and cons. C compatibility made it easy for C programmers to migrate to C++; on the downside, many C++ compiler users kept programming in C, so the expressiveness of OOP-based C++ was not being exploited. The evolution of Java eliminated many of the problems of the C++ language; it conserves the OO features and adds further support for portability and distribution.

UFPs can be used to indicate the relative program sizes required to implement a given functionality. For example, to achieve a given application with a fixed number of function points, one of the following program sizes would be required:

  1,000,000 lines of assembly language
  400,000 lines of C
  220,000 lines of Ada 83
  175,000 lines of Ada 95 or C++

(This conversion is illustrated in the sketch at the end of this section.) Reduction in the size of human-generated code in turn reduces the size of the team and the time needed for development. Adding a commercial DBMS, a commercial GUI builder, and commercial middleware can reduce the effective size of the development to a final size of about 75,000 lines of Ada 95 or C++, with integration of several commercial components. The use of the highest level language and appropriate commercial components has a sizable impact on cost, particularly for large projects, which have higher life-cycle costs.

Generally, simpler is better: reducing size increases understandability, changeability, and reliability. The data in the table illustrate why modern languages such as C++, Ada 95, Java, and Visual Basic are preferred: their level of expressiveness is attractive. There is a risk of misuse in applying these data, however; they are a precise average of several imprecise numbers, and each language has its own domain of usage. The values indicate only the relative expressive power of the various languages. Commercial components and code generators can further reduce the size of human-generated code. However, higher level abstraction technologies tend to degrade performance and increase resource consumption. These drawbacks can mostly be overcome by hardware performance improvements and optimization, although such improvements may not be as effective in embedded systems.
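As a rough illustration of Table 3-2, the sketch below converts a fixed amount of functionality, expressed in UFPs, into the approximate SLOC each language would need. The figure of 3,125 UFPs is an assumed value, back-calculated so that the results line up with the example sizes quoted above (about 1,000,000 lines of assembly, 400,000 of C, and so on).

```python
# Sketch: converting a fixed functionality (in universal function points)
# into approximate program sizes, using the SLOC-per-UFP values of Table 3-2.
# The application size of 3,125 UFPs is an assumed value, back-calculated to
# reproduce the example figures in the text.

SLOC_PER_UFP = {
    "Assembly": 320, "C": 128, "FORTRAN 77": 105, "COBOL 85": 91,
    "Ada 83": 71, "C++": 56, "Ada 95": 55, "Java": 55, "Visual Basic": 35,
}

def estimated_sloc(ufps: int, language: str) -> int:
    """Approximate human-generated SLOC needed to implement `ufps` function points."""
    return ufps * SLOC_PER_UFP[language]

if __name__ == "__main__":
    app_size_ufp = 3125
    for lang in ("Assembly", "C", "Ada 83", "C++"):
        print(f"{lang:>8}: ~{estimated_sloc(app_size_ufp, lang):,} SLOC")
```

The point of the comparison is the ratio between languages, not the absolute numbers, which is why the table's values should be read only as relative expressiveness.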

3.1.2 OBJECT-ORIENTED METHODS AND VISUAL MODELING

The later part of the 1990s saw a shift toward object-oriented (OO) technologies. Studies have concluded that OO programming languages appear to benefit both software productivity and software quality, but an economic benefit has yet to be demonstrated. One reason for the lack of proof could be the high cost of training in OO design methods such as the UML. OO technology provides more formalized notations for capturing and visualizing software abstractions, which helps reduce the overall size of the product to be developed. Grady Booch proposed three other reasons for the success of OO projects; they are good examples of the interrelationships among the dimensions of improving software economics:
1. An OO model of the problem and its solution encourages a common vocabulary between the end users of a system and its developers, thus creating a shared understanding of the problem being solved. This is an example of how the use of OO technology improves teamwork and interpersonal communications.
2. The use of continuous integration creates opportunities to recognize risk early and make incremental corrections without destabilizing the entire development effort. This aspect of OO technology enables an architecture-first process, in which integration is an early and continuous life-cycle activity.
3. An OO architecture provides a clear separation of concerns among disparate elements of a system, creating firewalls that prevent a change in one part of the system from rending the fabric of the entire architecture. This feature is crucial in the languages and environments used to implement OO architectures.
Booch also summarized five characteristics of a successful OO project:
1. A ruthless focus on the development of a system that provides a well-understood collection of essential minimal characteristics.
2. The existence of a culture that is centered on results, encourages communication, and yet is not afraid to fail.
3. The effective use of OO modeling.
4. The existence of a strong architectural vision.
5. The application of a well-managed iterative and incremental development life cycle.
OO methods, notations, and visual modeling provide strong technology support for the process framework.

3.1.3 REUSE

Reusing existing components and building reusable components have been natural software engineering activities ever since the earliest improvements in programming languages.

Software design methods have always dealt implicitly with reuse in order to minimize development costs while achieving all the other required attributes of performance, feature set, and quality. Reuse should be treated as a routine part of achieving a return on investment; common architectures, common processes, precedent experience, and common environments are all instances of reuse. An obstacle to reuse has been the fragmentation of languages, operating systems, notations, machine architectures, tools, and standards. Microsoft's success on the PC platform is a counterexample that proves the point that such fragmentation is detrimental.

Reuse takes place basically for economic reasons, so the amount of money made or saved is a good metric for identifying whether a set of components is truly reusable. Reusable components offered by organizations lacking the motivation, trustworthiness, and accountability for quality, support, improvements, and usability are suspect. Truly reusable components of value are transitioned into commercial products supported by organizations with the following characteristics:
1. They have an economic motivation for continued support.
2. They take ownership of improving product quality, adding new features, and transitioning to new technologies.
3. They have a sufficiently broad customer base to be profitable.
The cost of developing a reusable component is not trivial. Figure 3-1 examines the economic trade-offs. The steep initial curve illustrates the economic difficulty of developing reusable components: unless the objective is to support reuse across many projects, a convincing business case cannot be made. Selling reusable components has to be a mainline business of the organization for it to be a fit business case.
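The trade-off examined in Figure 3-1 can be made concrete with a small amortization sketch. The premium multipliers (a 2-project solution costing 50% more than a 1-project solution, a 5-project solution costing 125% more) follow the figure's annotations described below; the base cost of 100 units is a hypothetical value.

```python
# Sketch of the reuse business case behind Figure 3-1: the up-front premium for
# building a reusable component, amortized over the projects that reuse it.
# Premium multipliers follow the figure's annotations; the base cost is hypothetical.

REUSE_COST_MULTIPLIER = {1: 1.00, 2: 1.50, 5: 2.25}  # relative to the 1-project solution

def cost_per_project(base_cost: float, n_projects: int) -> float:
    """Amortized development cost per project for an n-project reuse target."""
    return base_cost * REUSE_COST_MULTIPLIER[n_projects] / n_projects

if __name__ == "__main__":
    base = 100.0  # hypothetical cost of the single-project (non-reusable) solution
    for n in (1, 2, 5):
        print(f"{n}-project solution: {cost_per_project(base, n):5.1f} cost units per project")
```

The per-project cost falls only when the extra investment is actually spread across several projects, which is why a convincing business case needs reuse across many projects or a commercial product line.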

To succeed in marketing commercial components, an organization needs three enduring elements: a development group, a support infrastructure, and a product-oriented sales and marketing infrastructure. The complexity and cost of developing reusable components should not be underestimated. Reuse is an important discipline that has an impact on the efficiency of all workflows and the quality of most artifacts.

3.1.4 COMMERCIAL COMPONENTS

Nowadays the approach being pursued is to maximize the integration of commercial components and off-the-shelf products. The use of commercial components is desirable for reducing custom development, but it is not a straightforward practice. Table 3-3 lists some of the advantages and disadvantages of using commercial components. The trade-offs are particularly acute in mission-critical domains. Because these trade-offs have global effects on quality, cost, and supportability, the selection of commercial components over development of custom components has a significant impact on a project's overall architecture.

TABLE 3-3. Advantages and disadvantages of commercial components versus custom software

COMMERCIAL COMPONENTS
Advantages: predictable license costs; broadly used, mature technology; available now; dedicated support organization; hardware/software independence; rich in functionality.
Disadvantages: frequent upgrades; up-front license fees; recurring maintenance fees; dependency on the vendor; run-time efficiency sacrifices; functionality constraints; integration not always trivial; no control over upgrades and maintenance; unnecessary features that consume extra resources; often inadequate reliability and stability; multiple-vendor incompatibilities.

CUSTOM DEVELOPMENT
Advantages: complete change freedom; smaller and often simpler implementations; often better performance; control of development and enhancement.
Disadvantages: expensive, unpredictable development; unpredictable availability date; undefined maintenance model; immature and fragile; single-platform dependency; drain on expert resources.

The paramount message here is that these decisions must be made early in the life cycle as part of the architectural design.

3.2 IMPROVING SOFTWARE PROCESSES

Process is an overloaded term.

For software-oriented organizations there are many processes and subprocesses. The main and distinct process perspectives are:
Metaprocess: an organization's policies, procedures, and practices for pursuing a software-intensive line of business. The focus of this process is on organizational economics, long-term strategies, and software ROI.
Macroprocess: a project's policies, procedures, and practices for producing a complete software product within certain cost, schedule, and quality constraints. The focus of the macroprocess is on creating an adequate instance of the metaprocess for a specific set of constraints.
Microprocess: a project team's policies, procedures, and practices for achieving an artifact of the software process. The focus of the microprocess is on achieving an intermediate product baseline with adequate quality and adequate functionality as economically and rapidly as possible.
Although these three levels of process overlap somewhat, they have different objectives, audiences, metrics, concerns, and time scales, as shown in Table 3-4.

TABLE 3-4. Three levels of process and their attributes

METAPROCESS
Subject: line of business
Objectives: line-of-business profitability; competitiveness
Audience: acquisition authorities, customers; organizational management
Metrics: project predictability; revenue, market share
Concerns: bureaucracy vs. standardization
Time scale: 6 to 12 months

MACROPROCESS
Subject: project
Objectives: project profitability; risk management; project budget, schedule, quality
Audience: software project managers; software engineers
Metrics: on budget, on schedule; major milestone success; project scrap and rework
Concerns: quality vs. financial performance
Time scale: 1 to many years

MICROPROCESS
Subject: iteration
Objectives: resource management; risk resolution; milestone budget, schedule, quality
Audience: subproject managers; software engineers
Metrics: on budget, on schedule; major milestone progress; release/iteration scrap and rework
Concerns: content vs. schedule
Time scale: 1 to 6 months

The macroprocess is the project-level process that affects the cost estimation model. All project processes consist of productive activities and overhead activities. For a project to be successful, a complex web of sequential and parallel steps is required. As the scale of the project increases, so does the complexity of this web, and overhead steps must be included to manage it. Productive activities result in tangible progress toward the end product; in a software project they include prototyping, modeling, coding, debugging, and user documentation. Overhead activities, whose effect is intangible, are required for plan preparation, documentation, progress monitoring, risk assessment, financial assessment, configuration control, quality assessment, integration, testing, late scrap and rework, management, personnel training, business administration, and other such tasks.

Overhead activities include many value-added efforts, but the less effort devoted to them, the more effort can be expended on productive activities. The objective of process improvement is to maximize the allocation of resources to productive activities and minimize the impact of overhead activities on resources such as personnel, computers, and schedule. Scrap and rework arise at two stages of the life cycle. The first arises during regular development as a by-product of prototyping efforts; this is a productive necessity for resolving the unknowns in the solution space. The second, late scrap and rework, arises late in the life cycle and is highly undesirable. Personnel training can be viewed from two perspectives: as an organizational responsibility and as a project responsibility. Training people on the project in processes, technologies, or tools adds to project overhead, whereas personnel who are already trained before joining the project save the project considerable time and resources. The quality of the software process strongly affects the required effort, and thereby the schedule, for producing the software product. The difference between a good process and a bad one will affect overall cost estimates by 50% to 100%, and the reduction in effort will improve the overall schedule. A better process can therefore have an even greater effect in reducing the time it takes the team to achieve the product vision with the required quality. The reason is that schedule improvement has three dimensions:

1. We could take an N-step process and improve the efficiency of each step.
2. We could take an N-step process and eliminate some steps so that it is now only an M-step process.
3. We could take an N-step process and use more concurrency in the activities being performed or the resources being applied.
Time-to-market and last-minute improvement strategies emphasize the first dimension; there is far greater potential in focusing on the second and third dimensions. The primary focus of process improvement should be on achieving an adequate solution in the minimum number of iterations and eliminating as much downstream scrap and rework as possible. Every instance of rework introduces a sequential set of tasks that must be redone. Suppose a team completes the sequential steps of analysis, design, coding, and testing of a feature, and then uncovers a design flaw in testing: a sequence of redesign, recode, and retest is now required. These task sequences are the primary obstacle to schedule compression, so the primary impact of process improvement should be the reduction of scrap and rework late in the life-cycle phases. In a perfect software engineering world, with an immaculate problem description, an obvious solution space, a development team of experienced personnel, adequate resources, and stakeholders with common goals, a software development process could be executed in one iteration with negligible scrap and rework. But we work in an imperfect world, and we need to manage engineering activities so that scrap and rework profiles do not impact the win conditions of any stakeholders. This is the background for most process improvements.
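As a rough, back-of-the-envelope illustration of the three dimensions, the sketch below assumes a baseline of ten sequential 10-day steps and specific improvement factors; both the baseline and the factors are assumptions made only for the example, not figures from the text.

# Illustrative only: the baseline process and the improvement factors below
# are assumed for the sake of the example.

steps = [10.0] * 10                       # assumed N-step process, 10 days per step

# Dimension 1: improve the efficiency of each step (assume each step gets 20% faster).
dim1 = sum(d * 0.8 for d in steps)

# Dimension 2: eliminate some steps (assume the N-step process becomes an M-step one, M = 7).
dim2 = sum(steps[:7])

# Dimension 3: apply more concurrency (assume two perfectly parallel tracks).
dim3 = sum(steps) / 2

print(sum(steps), dim1, dim2, dim3)       # 100.0 -> 80.0, 70.0, 50.0 days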

3.3 IMPROVING TEAM EFFECTIVENESS

Differences in personnel account for the greatest swings in productivity. The original COCOMO model suggests that the combined effects of personnel skill and experience can have an impact on productivity of as much as a factor of four (a rough illustrative calculation appears at the end of this section). That is the difference between a team of amateurs and a team of experts. In practice, however, it is risky to assess a given team as being off-scale in either direction: a large team almost always ends up with nominal people and nominal experience, and even a team made up entirely of highly experienced, high-IQ geniuses may turn out to be dysfunctional. So instead of just hiring good people, the goal should be to formulate a good team. The two most important aspects of an excellent team are balance and coverage. When a team is out of balance, it is vulnerable: a football team needs diverse skills, and so does a software development team. Teamwork is more important than the sum of the individuals, so a software project manager should configure a balance of talent, with highly skilled people in the leverage positions. Some maxims of team management are:
A well-managed project can succeed with a nominal engineering team.
A mismanaged project will almost never succeed, even with an expert team.
A well-architected system can be built by a nominal team of software builders.
A poorly architected system will flounder even with an expert team.
Boehm's five staffing principles:
1. The principle of top talent: use better and fewer people. There is a natural team size for most jobs, and being grossly over or under this size is counterproductive for team dynamics because it results in too little or too much pressure on individuals to perform.
2. The principle of job matching: fit the task to the skills and motivation of the people available.

With software engineers it is difficult to discriminate the intangible (and largely immeasurable) personnel skills and optimal task allocations, and personal agendas (likes and dislikes) further complicate assignments. On software teams it is common for talented programmers to seek promotion to architect or manager, yet the skill sets required for architects and managers are quite different: most talented programmers are innately unqualified to be architects and managers, and vice versa. Individuals and organizations nevertheless view such promotions as desirable. Sometimes it is a losing proposition either way: promote someone not fully qualified and the result is a bad situation; withhold promotion from someone who desires it and that person may not perform to capacity.
3. The principle of career progression: an organization does best in the long run by helping its people to self-actualize. Good performers usually self-actualize in any environment; organizations can either help or hinder employee self-actualization, and organizational energy benefits average and below-average performers the most. Organizational training programs are strategic, with educational value, whereas project training is purely tactical, intended to be useful and applied with immediate effect.
4. The principle of team balance: select people who will complement and harmonize with one another. Software team balance has many dimensions:
Raw skills: intelligence, objectivity, creativity, organization, analytical thinking
Psychological makeup: leaders and followers, risk takers and conservatives, visionaries and nitpickers, cynics and optimists
Objectives: financial, feature set, quality, timeliness
When a team is unbalanced in any one of these dimensions, the project becomes risky. Balancing a team is a paramount factor in good teamwork.
5. The principle of phase-out: keeping a misfit on the team doesn't benefit anyone. A misfit demotivates other team members, will not self-actualize, and disrupts the team balance in some dimension.

Misfits are obvious, and it is never right to procrastinate in weeding them out. Software development is a team effort, and managers must nurture a culture of teamwork and results rather than individual accomplishment. Of the five principles, team balance and job matching should be the primary objectives; top talent and phase-out are secondary objectives because they must be applied within the context of team balance. Although career progression needs to be addressed as an employment practice, individuals or organizations that stress it over the success of the team will not last long in the marketplace. Software project managers also need certain leadership qualities to enhance team effectiveness. Some of the crucial attributes are:
1. Hiring skills. Placing the right person in the right job seems obvious but is hard to achieve.
2. Customer-interface skill. A prerequisite for success is the avoidance of adversarial relationships among stakeholders.
3. Decision-making skill. Only a decisive person can have a clear sense of direction and direct others; indecisiveness is not a characteristic of a successful manager.
4. Team-building skill. Teamwork requires that a manager establish trust, motivate progress, exploit eccentric skilled performers, transition average people into top performers, eliminate misfits, and consolidate diverse opinions into a team direction.
5. Selling skill. Successful managers must sell all stakeholders on decisions and priorities, sell candidates on job positions, sell changes to the status quo in the face of resistance, and sell achievements against objectives.

In practice, selling requires continuous negotiation, compromise, and empathy.
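To make the factor-of-four claim mentioned earlier concrete, here is a rough sketch in the spirit of COCOMO-style effort multipliers. The numeric values are illustrative approximations of the analyst- and programmer-capability extremes, not authoritative model parameters, and the nominal effort figure is an assumption.

# Illustrative sketch only: the multiplier values approximate the extremes of
# COCOMO-style personnel ratings and are not authoritative model parameters.

def scaled_effort(nominal_effort, multipliers):
    """Scale a nominal effort estimate by a set of effort multipliers."""
    effort = nominal_effort
    for m in multipliers:
        effort *= m
    return effort

nominal = 100.0  # assumed nominal effort in person-months

low_capability_team  = scaled_effort(nominal, [1.46, 1.42])  # very low analyst/programmer capability
high_capability_team = scaled_effort(nominal, [0.71, 0.70])  # very high analyst/programmer capability

print(low_capability_team / high_capability_team)  # roughly 4.2: the "factor of four" swing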

3.4 IMPROVING AUTOMATION THROUGH SOFTWARE ENVIRONMENTS

The tools and environment used in the software process have a linear effect on the productivity of the process. Planning tools, requirements management tools, visual modeling tools, compilers, editors, debuggers, quality assurance analysis tools, test tools, and user interfaces provide support for automating the evolution of software engineering artifacts, while configuration management environments provide the foundation for executing and instrumenting the process. The isolated impact of tools and automation allows improvements of 20% to 40% in effort; when tools and environments are viewed as the primary delivery vehicle for process automation and improvement, their impact can be higher. Process improvements reduce scrap and rework by eliminating steps and minimizing the number of iterations in the process. Process improvement also increases the efficiency of certain steps, primarily through the environment automating manual tasks that are inefficient or error-prone. The transition to a mature software process introduces new challenges and opportunities for management control of concurrent activities and for tangible progress and quality assessment. An environment that provides semantic integration, in which the environment understands the detailed meaning of the development artifacts, together with process automation, can improve productivity, improve software quality, and accelerate the adoption of modern techniques. An environment that supports incremental compilation, automated system builds, and integrated regression testing can provide rapid turnaround for iterative development and allow development teams to iterate more freely.

An important emphasis of a modern approach is to define the development and maintenance environment as a first-class artifact of the process. A robust, integrated development environment must support the automation of the development process and should include requirements management, document automation, host/target programming tools, automated regression testing, continuous and integrated change management, and feature/defect tracking. A common thread in successful software projects is that they hire good people and provide them with good tools to accomplish their jobs. Automation of the design process provides payback in quality, in the ability to estimate costs and schedules, and in overall productivity using a smaller team. Integrated toolsets play an increasingly important role in incremental/iterative development by allowing designers to traverse quickly among development artifacts and keep them up to date. Round-trip engineering is the term used to describe the key capability of environments that support iterative development. Because different information repositories are maintained for the engineering artifacts, automation support is needed to ensure efficient and error-free transition of data from one artifact to another. Round-trip engineering describes the environment support needed to change an artifact freely and have the other artifacts automatically changed, so that consistency is maintained among the entire set of requirements, design, implementation, and deployment artifacts. Forward engineering is the automation of the creation of one engineering artifact from another, more abstract representation; for example, compilers and linkers provide automated transition of source code into executable code.
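As a toy sketch of forward engineering, the snippet below derives a source-code skeleton from a more abstract model; the model format and generator are hypothetical and do not correspond to any particular tool.

# Hypothetical, minimal illustration of forward engineering: deriving a
# source-code skeleton from a more abstract model. The model format is
# invented for this sketch and does not correspond to any particular tool.

model = {
    "Account": ["deposit", "withdraw", "balance"],
    "Customer": ["open_account", "close_account"],
}

def generate_skeleton(model):
    """Emit class skeletons (one per model element) as Python source text."""
    lines = []
    for class_name, operations in model.items():
        lines.append(f"class {class_name}:")
        for op in operations:
            lines.append(f"    def {op}(self):")
            lines.append("        raise NotImplementedError")
        lines.append("")
    return "\n".join(lines)

print(generate_skeleton(model))

A reverse-engineering step would do the opposite, recovering the abstract model from existing source code, as described next.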

Reverse engineering is the generation or modification of a more abstract representation from an existing artifact; for example, creating a visual design model from a source code representation. With the use of heterogeneous components, platforms, and languages in architectures, there is an increase in the complexity of building, controlling, and maintaining large-scale webs of components, and this increased complexity necessitates configuration control and automation of build management. Present environments do not yet support automation to the extent desired; for example, automated test case construction from use case and scenario descriptions has not yet evolved beyond trivial cases such as unit test scenarios. When describing the economic improvements associated with tools and environments, tool vendors make relatively accurate individual assessments of life-cycle activities to support their claims of economic benefit. For example:
Requirements analysis and evolution activities consume 40% of life-cycle costs.
Software design activities have an impact on more than 50% of software development effort and schedule.
Coding and unit testing consume about 50% of software development effort and schedule.
Test activities can consume as much as 50% of a project's resources.
Configuration control and change management are critical activities that can consume as much as 25% of the resources of a large-scale project.
Documentation activities can consume more than 30% of project engineering resources.
Project management, business administration, and progress assessment can consume as much as 30% of project budgets.
Taken individually, each of these claims is correct. Taken collectively, however, they imply that it takes 275% of budget and schedule resources to complete most projects!

Consider a misleading conclusion: "This testing tool improves testing productivity by 20%. Because test activities consume 50% of the life cycle, there will be a 10% net productivity gain to the entire project. With a $1 million budget, it is affordable to spend $100,000 on test tools." Such simple assertions are not reasonable, given the complex interrelationships among the software development activities and the tools. The combined effect of all tools tends to be less than about 40%, and most of this benefit cannot be gained without some change in the process. So an individual tool can rarely improve a project's productivity by more than about 5%. In general, it is better to normalize claims against the virtual 275% total than against the 100% total we deal with in the real world.
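Working through that arithmetic with the normalization the text recommends (the percentages are the ones quoted above; only the calculation is added here):

# Reworking the vendor claim above with the normalization recommended in the text.

tool_gain          = 0.20   # vendor claim: 20% better testing productivity
claimed_test_share = 0.50   # vendor claim: testing is 50% of the life cycle
sum_of_all_claims  = 2.75   # the "virtual" 275% obtained by adding every vendor's claim

naive_project_gain      = tool_gain * claimed_test_share                        # 10%
normalized_project_gain = tool_gain * (claimed_test_share / sum_of_all_claims)  # about 3.6%

print(f"Naive estimate:      {naive_project_gain:.1%}")
print(f"Normalized estimate: {normalized_project_gain:.1%}")
# The normalized figure is consistent with the ~5% per-tool improvement cited above.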

3.5 ACHIEVING REQUIRED QUALITY

The best practices derived from development processes and technologies not only improve cost efficiency but also yield improvements in quality for the same cost. Some dimensions of quality improvement are summarized in Table 3-5; in each row, the first entry describes conventional processes and the second describes modern iterative processes.

TABLE 3-5. General quality improvements with a modern process

Requirements misunderstanding: discovered late vs. resolved early.
Development risk: unknown until late vs. understood and resolved early.
Commercial components: mostly unavailable vs. still a quality driver, but trade-offs must be resolved early in the life cycle.
Change management: late in the life cycle, chaotic and malignant vs. early in the life cycle, straightforward and benign.
Design errors: discovered late vs. resolved early.
Automation: mostly error-prone, manual procedures vs. mostly automated, error-free evolution of artifacts.
Resource adequacy: unpredictable vs. predictable.
Schedules: over-constrained vs. tunable to quality, performance, and technology.
Target performance: paper-based analysis or separate simulation vs. executing prototypes, early performance feedback, quantitative understanding.
Software process rigor: document-based vs. managed, measured, and tool-supported.

Key practices that improve overall software quality include the following:
1. Focusing on driving requirements and critical use cases early in the life cycle, on requirements completeness and traceability late in the life cycle, and, throughout the life cycle, on a balance between requirements evolution, design evolution, and plan evolution.
2. Using metrics and indicators to measure the progress and quality of an architecture as it evolves from a high-level prototype into a fully compliant product.
3. Providing integrated life-cycle environments that support early and continuous configuration control, change management, rigorous design methods, document automation, and regression test automation.
4. Using visual modeling and higher level languages that support architectural control, abstraction, reliable programming, reuse, and self-documentation.
5. Early and continuous insight into performance issues through demonstration-based evaluations.
When projects incorporate mixtures of commercial components and custom-developed components, it is important to have insight into run-time performance issues. Conventional development processes stressed early sizing and timing estimates of computer program resource utilization. The typical chronology of events in performance assessment is as follows:
a) Project inception. The proposed design is asserted to be low risk with adequate performance margin.
b) Initial design review. Optimistic assessments of adequate design margin are based on paper analysis or rough simulation of the critical threads.

The actual application algorithms and database sizes are fairly well understood at this point, but the infrastructure (operating system overhead, database management overhead, and interprocess and network communication overhead) and all the secondary threads are typically misunderstood.
c) Mid-life-cycle design review. The assessments start whittling away at the margins, as early benchmarks and initial tests begin exposing the optimism inherent in earlier estimates.
d) Integration and test. Serious performance problems are uncovered, necessitating fundamental changes in the architecture. Although the infrastructure is blamed, the real culprit is usually immature use of the infrastructure, immature architectural solutions, or poorly understood early design trade-offs.
This sequence was the result of early performance insight based solely on naive engineering judgment of innumerable criteria. A demonstration-based approach can provide significantly more accurate assessments of performance issues, particularly in large-scale distributed systems composed of many interacting components. Early performance issues are typical and even healthy, because they expose architectural flaws or weaknesses in commercial components early in the life cycle, when the right trade-offs can still be made.

3.6 PEER INSPECTIONS: A PRAGMATIC VIEW

Peer inspections and reviews are valuable, but only as secondary mechanisms. The following primary quality mechanisms are more useful contributors to quality and should be emphasized in the management process:
1. Transitioning engineering information from one artifact set to another, thereby assessing the consistency, feasibility, understandability, and technology constraints inherent in the engineering artifacts.

2. Major milestone demonstrations that assess the artifacts against tangible criteria in the context of relevant use cases.
3. Environment tools (compilers, debuggers, analyzers, automated test suites) that ensure representation rigor, consistency, completeness, and change control.
4. Life-cycle testing for detailed insight into critical trade-offs, acceptance criteria, and requirements compliance.
5. Change management metrics for objective insight into multiple-perspective change trends and convergence or divergence from quality and progress goals.
In certain cases inspections do provide a significant return. One such value lies in the professional development of a team: it is generally useful to have the products of junior team members reviewed by senior mentors. This exchange accelerates the acquisition of knowledge and skill in new personnel; gross blunders can be caught and feedback properly channeled, reducing the chance that bad practices are perpetuated. Another value lies in holding authors accountable for quality products. All authors of software and documentation should have their products scrutinized as a natural by-product of the process, and the coverage of inspection should be across authors rather than across components, so that junior authors also learn while scrutinizing the work of seniors. Varying levels of informal inspection are performed continuously when developers read or integrate software with another author's software, and during testing by independent test teams; this inspection is more tangibly focused on the integrated and executable aspects of the system. Any critical component should be inspected by several stakeholders in its quality, performance, or feature set, and an inspection focused on resolving an existing issue can be effective in determining the cause of the issue or in finding a resolution once the cause is known.

Many organizations overemphasize meetings and formal inspections and require coverage across all engineering products, an approach that can be counterproductive. Only about 20% of the technical artifacts (use cases, design models, source code, and test cases) deserve detailed scrutiny, relative to other, more useful quality assurance activities, and a process whose quality assurance emphasis is on inspections will not be cost-effective. Significant design errors or architecture issues are rarely obvious unless the inspection is narrowly focused on a particular issue; most inspections are superficial. When systems are highly complex, with innumerable components, concurrent execution, distributed resources, and other equally demanding dimensions of complexity, it is very difficult to comprehend the dynamic interactions within a software system even under simple use cases. Random human inspections therefore tend to degenerate into comments on style and first-order semantic issues. They rarely result in the discovery of real performance bottlenecks, serious control issues (such as deadlocks, race conditions, or resource contention), or architectural weaknesses (such as flaws in scalability, reliability, or interoperability). Architectural issues are exposed only through more rigorous engineering activities such as the following:
Analysis, prototyping, or experimentation
Constructing design models
Committing the current state of the design model to an executable implementation
Demonstrating the current implementation's strengths and weaknesses in the context of critical subsets of the use cases and scenarios
Incorporating lessons learned back into the models, use cases, implementations, and plans

Architectural quality achievement is inherent in an iterative process that evolves the artifact sets together in balance. The checkpoints along the way are numerous, including human reviews and inspections focused on critical issues. Focusing a large percentage of a project's resources on human inspections is bad practice and only perpetuates the existence of low-value-added box checkers who have no stake in the project's success. Quality assurance is everyone's responsibility and should be integral to almost all process activities, rather than a separate discipline performed by quality assurance specialists.
Questions on this chapter:
1. The key to substantial improvement of software economics is a balanced attack across several interrelated dimensions. Comment in detail.
2. Explain how reducing software product size contributes to the improvement of software economics.
3. Explain Booch's reasons for the success of object-oriented projects. Clearly bring out the interrelationships among the dimensions of improving software economics.
4. Explain the relative advantages and disadvantages of using commercial components versus custom software.
5. Explain how software economics is improved by improving software processes.
6. Explain how improvement of team effectiveness contributes to software economics.
7. Explain Boehm's staffing principles.
8. Explain how software environments help in improving automation as a way of improving software economics.
9. Explain the key practices that improve overall software quality, in view of the general quality improvements with a modern process in comparison with conventional processes.
10. Comment on the relative merits and demerits of peer inspections for quality assurance.

Difficult words: caveat: caution; trivial: small, inconsequential.

Chapter 4 THE OLD WAY AND THE NEW

A significant reengineering of the software development process is taking place: conventional management and technical practices are being replaced by new approaches that combine success themes with advances in software engineering technology. This transition is motivated by the insatiable demand for more software features, produced more rapidly, under more competitive pressure to reduce cost. In the commercial software industry, the combination of competitive pressures, profitability, diversity of customers, and rapidly changing technology has driven organizations to adopt new management approaches. Many systems likewise required a new management paradigm to respond to budget pressures, the dynamic and diverse threat environment, the long operational lifetime of systems, and the predominance of large-scale, complex applications.

4.1 THE PRINCIPLES OF CONVENTIONAL SOFTWARE ENGINEERING

After years of software development experience, the industry has formulated a number of principles. The following description of today's software engineering principles serves as a benchmark for future ideas. [This description is drawn from Davis's book, which enumerates 201 principles; the top 30 are described here.]
1. Make quality #1. Quality must be quantified and mechanisms put into place to motivate its achievement. It is not easy to define quality at the outset of a project. A modern process framework strives to understand the trade-offs among features, quality, cost, and schedule as early in the life cycle as possible.

This understanding must be achieved before quality can be specified or its achievement managed.
2. High-quality software is possible. Techniques that have been demonstrated to increase quality include involving the customer, prototyping, simplifying design, conducting inspections, and hiring the best people.