UNIT I - SOFTWARE PROCESS AND PROJECT MANAGEMENT

Introduction to Software Engineering - Software Process - Prescriptive and Specialized Process Models - Software Project Management: Estimation - LOC and FP Based Estimation - COCOMO Model - Project Scheduling - Scheduling, Earned Value Analysis - Risk Management.

INTRODUCTION TO SOFTWARE ENGINEERING

Computer software is the product that software professionals build and then support over the long term. It encompasses programs that execute within a computer of any size and architecture, content that is presented as the computer programs execute, and descriptive information in both hard copy and virtual forms that spans virtually any electronic media. Software engineering encompasses a process, a collection of methods (practice), and an array of tools that allow professionals to build high-quality computer software. Software engineers build and support software, and virtually everyone in the industrialized world uses it either directly or indirectly.

THE NATURE OF SOFTWARE

Software takes on a dual role: it is a product and, at the same time, the vehicle for delivering a product. As a product, it delivers the computing potential embodied by computer hardware or, more broadly, by a network of computers that are accessible by local hardware. As the vehicle used to deliver the product, software acts as the basis for the control of the computer (operating systems), the communication of information (networks), and the creation and control of other programs (software tools and environments). Software delivers the most important product of our time: information. It transforms personal data (e.g., an individual's financial transactions) so that the data can be more useful in a local context; it manages business information to enhance competitiveness; it provides a gateway to worldwide information networks (e.g., the Internet) and provides the means for acquiring information in all of its forms.

SOFTWARE - Definition

Software is: (1) instructions (computer programs) that when executed provide desired features, function, and performance; (2) data structures that enable the programs to adequately manipulate information; and (3) descriptive information in both hard copy and virtual forms that describes the operation and use of the programs.

SOFTWARE - Characteristics

1. Software is developed or engineered; it is not manufactured in the classical sense.
2. Software doesn't wear out.

Fig. 1: The relationship, often called the bathtub curve, indicates that hardware exhibits relatively high failure rates early in its life (these failures are often attributable to design or manufacturing defects); defects are corrected and the failure rate drops to a steady-state level (hopefully, quite low) for some period of time. As time passes, however, the failure rate rises again as hardware components suffer from the cumulative effects of dust, vibration, abuse, temperature extremes, and many other environmental maladies. Stated simply, the hardware begins to wear out.

Fig. 2: The failure rate curve for software should take the form of the idealized curve. Undiscovered defects cause high failure rates early in the life of a program. However, these are corrected and the curve flattens. The idealized curve is a gross oversimplification of actual failure models for software. Nevertheless, software doesn't wear out.

3. Although the industry is moving toward component-based construction, most software continues to be custom built. A software component should be designed and implemented so that it can be reused in many different programs. Modern reusable components encapsulate both data and the processing that is applied to the data, enabling the software engineer to create new applications from reusable parts.

SOFTWARE APPLICATION DOMAINS

There are seven broad categories of computer software that present challenges for software engineers:

1. System software - a collection of programs written to service other programs. Some system software (e.g., compilers, editors, and file management utilities) processes complex, but determinate, information structures.
2. Application software - stand-alone programs that solve a specific business need. It facilitates business operations or management/technical decision making.
3. Engineering/scientific software - has been characterized by number-crunching algorithms. Applications range from astronomy to volcanology, from automotive stress analysis to space shuttle orbital dynamics, and from molecular biology to automated manufacturing.
4. Embedded software - resides within a product or system and is used to implement and control features and functions for the end user and for the system itself. Embedded software can perform limited and esoteric functions (e.g., key pad control for a microwave oven) or provide significant function and control capability.
5. Product-line software - designed to provide a specific capability for use by many different customers. Product-line software can focus on a limited and esoteric marketplace (e.g., inventory control products) or address mass consumer markets (e.g., word processing, spreadsheets, computer graphics, multimedia, entertainment, database management, and personal and business financial applications).
6. Web applications - called WebApps, this network-centric software category spans a wide array of applications.
7. Artificial intelligence software - makes use of non-numerical algorithms to solve complex problems that are not amenable to computation or straightforward analysis. Applications within this area include robotics, expert systems, pattern recognition (image and voice), artificial neural networks, theorem proving, and game playing.

Legacy software comprises older programs that were developed decades ago and have been continually modified to meet changes in business requirements and computing platforms. Legacy systems often evolve for one or more of the following reasons:
- The software must be adapted to meet the needs of new computing environments or technology.
- The software must be enhanced to implement new business requirements.
- The software must be extended to make it interoperable with other, more modern systems or databases.
- The software must be re-architected to make it viable within a network environment.

SOFTWARE ENGINEERING

Definition. Software engineering: (1) The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software. (2) The study of approaches as in (1).

Software engineering is a layered technology. The foundation for software engineering is the process layer. The software engineering process is the glue that holds the technology layers together and enables rational and timely development of computer software. Process defines a framework that must be established for effective delivery of software engineering technology. The software process forms the basis for management control of software projects and establishes the context in which technical methods are applied, work products (models, documents, data, reports, forms, etc.) are produced, milestones are established, quality is ensured, and change is properly managed.

Software engineering methods provide the technical how-tos for building software. Methods encompass a broad array of tasks that include communication, requirements analysis, design modeling, program construction, testing, and support. Software engineering methods rely on a set of basic principles that govern each area of the technology and include modeling activities and other descriptive techniques.

Software engineering tools provide automated or semi-automated support for the process and the methods. When tools are integrated so that information created by one tool can be used by another, a system for the support of software development, called computer-aided software engineering (CASE), is established.

THE SOFTWARE PROCESS

A process is a collection of activities, actions, and tasks that are performed when some work product is to be created. An activity strives to achieve a broad objective (e.g., communication with stakeholders) and is applied regardless of the application domain, size of the project, complexity of the effort, or degree of rigor with which software engineering is to be applied. An action (e.g., architectural design) encompasses a set of tasks that produce a major work product (e.g., an architectural design model). A task focuses on a small, but well-defined objective (e.g., conducting a unit test) that produces a tangible outcome.

A process framework establishes the foundation for a complete software engineering process by identifying a small number of framework activities that are applicable to all software projects, regardless of their size or complexity. In addition, the process framework encompasses a set of umbrella activities that are applicable across the entire software process. A generic process framework for software engineering encompasses five activities:

1. Communication: It is critically important to communicate and collaborate with the customer (and other stakeholders). The intent is to understand stakeholders' objectives for the project and to gather requirements that help define software features and functions.
2. Planning: A software project is a complicated journey, and the planning activity creates a map that helps guide the team as it makes the journey. The map, called a software project plan, defines the software engineering work by describing the technical tasks to be conducted, the risks that are likely, the resources that will be required, the work products to be produced, and a work schedule.
3. Modeling: You create a sketch of the thing so that you'll understand the big picture: what it will look like architecturally, how the constituent parts fit together, and many other characteristics. If required, you refine the sketch into greater and greater detail in an effort to better understand the problem and how you're going to solve it. A software engineer does the same thing by creating models to better understand software requirements and the design that will achieve those requirements.
4. Construction: This activity combines code generation (either manual or automated) and the testing that is required to uncover errors in the code.

5. Deployment: The software (as a complete entity or as a partially completed increment) is delivered to the customer, who evaluates the delivered product and provides feedback based on the evaluation.

Software engineering process framework activities are complemented by a number of umbrella activities. In general, umbrella activities are applied throughout a software project and help a software team manage and control progress, quality, change, and risk. Typical umbrella activities include:
- Software project tracking and control: allows the software team to assess progress against the project plan and take any necessary action to maintain the schedule.
- Risk management: assesses risks that may affect the outcome of the project or the quality of the product.
- Software quality assurance: defines and conducts the activities required to ensure software quality.
- Technical reviews: assess software engineering work products in an effort to uncover and remove errors before they are propagated to the next activity.
- Measurement: defines and collects process, project, and product measures that assist the team in delivering software that meets stakeholders' needs; can be used in conjunction with all other framework and umbrella activities.
- Software configuration management: manages the effects of change throughout the software process.
- Reusability management: defines criteria for work product reuse (including software components) and establishes mechanisms to achieve reusable components.
- Work product preparation and production: encompasses the activities required to create work products such as models, documents, logs, forms, and lists.

Agile process models emphasize project agility and follow a set of principles that lead to a more informal (but, proponents argue, no less effective) approach to software process. These process models are generally characterized as agile because they emphasize maneuverability and adaptability. They are appropriate for many types of projects and are particularly useful when Web applications are engineered.

SOFTWARE ENGINEERING PRACTICE

The Essence of Practice:

1. Understand the problem (communication and analysis).
   - Who has a stake in the solution to the problem? That is, who are the stakeholders?
   - What are the unknowns? What data, functions, and features are required to properly solve the problem?
   - Can the problem be compartmentalized? Is it possible to represent smaller problems that may be easier to understand?
   - Can the problem be represented graphically? Can an analysis model be created?
2. Plan a solution (modeling and software design).
   - Have you seen similar problems before? Are there patterns that are recognizable in a potential solution? Is there existing software that implements the data, functions, and features that are required?
   - Has a similar problem been solved? If so, are elements of the solution reusable?
   - Can subproblems be defined? If so, are solutions readily apparent for the subproblems?
   - Can you represent a solution in a manner that leads to effective implementation? Can a design model be created?
3. Carry out the plan (code generation).
   - Does the solution conform to the plan? Is source code traceable to the design model?
   - Is each component part of the solution provably correct? Have the design and code been reviewed, or better, have correctness proofs been applied to the algorithm?
4. Examine the result for accuracy (testing and quality assurance).
   - Is it possible to test each component part of the solution? Has a reasonable testing strategy been implemented?
   - Does the solution produce results that conform to the data, functions, and features that are required? Has the software been validated against all stakeholder requirements?

General Principles:
- The First Principle: The Reason It All Exists. A software system exists for one reason: to provide value to its users.
- The Second Principle: KISS (Keep It Simple, Stupid!). All design should be as simple as possible, but no simpler.
- The Third Principle: Maintain the Vision. A clear vision is essential to the success of a software project.
- The Fourth Principle: What You Produce, Others Will Consume. Always specify, design, and implement knowing someone else will have to understand what you are doing.
- The Fifth Principle: Be Open to the Future. Never design yourself into a corner. Always ask "what if," and prepare for all possible answers by creating systems that solve the general problem, not just the specific one.
- The Sixth Principle: Plan Ahead for Reuse. Reuse saves time and effort.
- The Seventh Principle: Think! Placing clear, complete thought before action almost always produces better results.

PRESCRIPTIVE AND SPECIALIZED PROCESS MODELS

A Generic Process Model: A process was defined as a collection of work activities, actions, and tasks that are performed when some work product is to be created. Each of these activities, actions, and tasks resides within a framework or model that defines their relationship with the process and with one another. Each software engineering action is defined by a task set that identifies the work tasks that are to be completed, the work products that will be produced, the quality assurance points that will be required, and the milestones that will be used to indicate progress. The generic model defines five framework activities: communication, planning, modeling, construction, and deployment. In addition, a set of umbrella activities (project tracking and control, risk management, quality assurance, configuration management, technical reviews, and others) is applied throughout the process. The aspect called process flow describes how the framework activities, and the actions and tasks that occur within each framework activity, are organized with respect to sequence and time.

PROCESS FLOW TYPES
- A linear process flow executes each of the five framework activities in sequence, beginning with communication and culminating with deployment (Figure 2.2a).
- An iterative process flow repeats one or more of the activities before proceeding to the next (Figure 2.2b).
- An evolutionary process flow executes the activities in a circular manner. Each circuit through the five activities leads to a more complete version of the software (Figure 2.2c).
- A parallel process flow executes one or more activities in parallel with other activities, e.g., modeling for one aspect of the software might be executed in parallel with construction of another aspect of the software (Figure 2.2d).

PRESCRIPTIVE PROCESS MODELS (very important)

Prescriptive process models were originally proposed to bring order to the chaos of software development.

The Waterfall Model: The waterfall model, sometimes called the classic life cycle, suggests a systematic, sequential approach to software development that begins with customer specification of requirements and progresses through planning, modeling, construction, and deployment, culminating in ongoing support of the completed software.

A variation in the representation of the waterfall model is called the V-model. The V-model depicts the relationship of quality assurance actions to the actions associated with communication, modeling, and early construction activities. As a software team moves down the left side of the V, basic problem requirements are refined into progressively more detailed and technical representations of the problem and its solution. Once code has been generated, the team moves up the right side of the V, essentially performing a series of tests (quality assurance actions) that validate each of the models created as the team moved down the left side. The V-model provides a way of visualizing how verification and validation actions are applied to earlier engineering work.

The waterfall model is the oldest paradigm for software engineering.

PROBLEMS:
- Real projects rarely follow the sequential flow that the model proposes. Although the linear model can accommodate iteration, it does so indirectly. As a result, changes can cause confusion as the project team proceeds.
- It is often difficult for the customer to state all requirements explicitly. The waterfall model requires this and has difficulty accommodating the natural uncertainty that exists at the beginning of many projects.
- The customer must have patience. A working version of the program(s) will not be available until late in the project time span.
Nevertheless, the waterfall model can serve as a useful process model in situations where requirements are fixed and work is to proceed to completion in a linear manner.

Incremental Process Models: The incremental model combines elements of linear and parallel process flows. The incremental model applies linear sequences in a staggered fashion as calendar time progresses. Each linear sequence produces deliverable increments of the software in a manner that is similar to the increments produced by an evolutionary process flow.

EXAMPLE: Word-processing software developed using the incremental paradigm might deliver basic file management, editing, and document production functions in the first increment; more sophisticated editing and document production capabilities in the second increment; spelling and grammar checking in the third increment; and advanced page layout capability in the fourth increment. It should be noted that the process flow for any increment can incorporate the prototyping paradigm.

When an incremental model is used, the first increment is often a core product. That is, basic requirements are addressed, but many supplementary features remain undelivered. The core product is used by the customer. As a result of use and/or evaluation, a plan is developed for the next increment. The plan addresses the modification of the core product to better meet the needs of the customer and the delivery of additional features and functionality. This process is repeated following the delivery of each increment, until the complete product is produced. The incremental process model focuses on the delivery of an operational product with each increment. Early increments are stripped-down versions of the final product, but they do provide capability that serves the user and also provide a platform for evaluation by the user.

Incremental development is particularly useful when staffing is unavailable for a complete implementation by the business deadline that has been established for the project. Early increments can be implemented with fewer people. If the core product is well received, then additional staff (if required) can be added to implement the next increment. In addition, increments can be planned to manage technical risks.

Evolutionary Process Models: Evolutionary models are iterative. They are characterized in a manner that enables you to develop increasingly more complete versions of the software. There are two common evolutionary process models:
- Prototyping
- The Spiral Model

PROTOTYPING: Often, a customer defines a set of general objectives for software, but does not identify detailed requirements for functions and features. In other cases, the developer may be unsure of the efficiency of an algorithm, the adaptability of an operating system, or the form that human-machine interaction should take. In these, and many other situations, a prototyping paradigm may offer the best approach. Although prototyping can be used as a stand-alone process model, it is more commonly used as a technique that can be implemented within the context of any one of the process models. The prototyping paradigm assists you and other stakeholders to better understand what is to be built when requirements are fuzzy.

Steps: The prototyping paradigm begins with communication. You meet with other stakeholders to define the overall objectives for the software, identify whatever requirements are known, and outline areas where further definition is mandatory. A prototyping iteration is planned quickly, and modeling occurs. A quick design focuses on a representation of those aspects of the software that will be visible to end users (e.g., human interface layout or output display formats). The quick design leads to the construction of a prototype. The prototype is deployed and evaluated by stakeholders, who provide feedback that is used to further refine requirements. Iteration occurs as the prototype is tuned to satisfy the needs of various stakeholders, while at the same time enabling you to better understand what needs to be done. Ideally, the prototype serves as a mechanism for identifying software requirements. If a working prototype is to be built, we can make use of existing program fragments or apply tools (e.g., report generators and window managers) that enable working programs to be generated quickly.

PROBLEMS:
- Stakeholders see what appears to be a working version of the software, unaware that the prototype is held together haphazardly and that, in the rush to get it working, you haven't considered overall software quality or long-term maintainability. When informed that the product must be rebuilt so that high levels of quality can be maintained, stakeholders cry foul and demand that "a few fixes" be applied to make the prototype a working product.
- As a software engineer, you often make implementation compromises in order to get a prototype working quickly. An inappropriate operating system or programming language may be used simply because it is available and known; an inefficient algorithm may be implemented simply to demonstrate capability. After a time, you may become comfortable with these choices and forget all the reasons why they were inappropriate. The less-than-ideal choice has now become an integral part of the system.

THE SPIRAL MODEL: The spiral model is an evolutionary software process model that couples the iterative nature of prototyping with the controlled and systematic aspects of the waterfall model. It provides the potential for rapid development of increasingly more complete versions of the software. According to Boehm, the spiral development model is a risk-driven process model generator that is used to guide multi-stakeholder concurrent engineering of software-intensive systems. It has two main distinguishing features. One is a cyclic approach for incrementally growing a system's degree of definition and implementation while decreasing its degree of risk. The other is a set of anchor point milestones for ensuring stakeholder commitment to feasible and mutually satisfactory system solutions.

Using the spiral model, software is developed in a series of evolutionary releases. During early iterations, the release might be a model or prototype. During later iterations, increasingly more complete versions of the engineered system are produced. A spiral model is divided into a set of framework activities defined by the software engineering team.

STEPS: The first circuit around the spiral might result in the development of a product specification; subsequent passes around the spiral might be used to develop a prototype and then progressively more sophisticated versions of the software. Each pass through the planning region results in adjustments to the project plan. Cost and schedule are adjusted based on feedback derived from the customer after delivery. In addition, the project manager adjusts the planned number of iterations required to complete the software. The spiral model can be adapted to apply throughout the life of the computer software. Therefore, the first circuit around the spiral might represent a concept development project that starts at the core of the spiral and continues for multiple iterations. If the concept is to be developed into an actual product, the process proceeds outward on the spiral and a new product development project commences. The new product will evolve through a number of iterations around the spiral. Later, a circuit around the spiral might be used to represent a product enhancement project.

The spiral model is a realistic approach to the development of large-scale systems and software. Because software evolves as the process progresses, the developer and customer better understand and react to risks at each evolutionary level. The spiral model uses prototyping as a risk reduction mechanism. It maintains the systematic stepwise approach suggested by the classic life cycle.

CONCURRENT MODELS: The concurrent development model, sometimes called concurrent engineering, allows a software team to represent iterative and concurrent elements of any of the process models. The activity "modeling" may be in any one of the states noted at any given time. Similarly, other activities, actions, or tasks (e.g., communication or construction) can be represented in an analogous manner. All software engineering activities exist concurrently but reside in different states. For example, early in a project the communication activity has completed its first iteration and exists in the "awaiting changes" state. The modeling activity, which existed in the "inactive" state while initial communication was completed, now makes a transition into the "under development" state. If, however, the customer indicates that changes in requirements must be made, the modeling activity moves from the "under development" state into the "awaiting changes" state. Concurrent modeling defines a series of events that will trigger transitions from state to state for each of the software engineering activities, actions, or tasks. For example, during early stages of design, an inconsistency in the requirements model is uncovered. This generates the event "analysis model correction," which will trigger the requirements analysis action from the "done" state into the "awaiting changes" state. Concurrent modeling is applicable to all types of software development and provides an accurate picture of the current state of a project. Each activity, action, or task on the network exists simultaneously with other activities, actions, or tasks. Events generated at one point in the process network trigger transitions among the states.

SPECIALIZED PROCESS MODELS (very important)

COMPONENT-BASED DEVELOPMENT: Commercial off-the-shelf (COTS) software components, developed by vendors who offer them as products, provide targeted functionality with well-defined interfaces that enable the component to be integrated into the software that is to be built. The component-based development model incorporates many of the characteristics of the spiral model. It is evolutionary in nature, demanding an iterative approach to the creation of software. However, the component-based development model constructs applications from prepackaged software components. Modeling and construction activities begin with the identification of candidate components. These components can be designed as either conventional software modules or object-oriented classes or packages of classes.

Steps to create components for your software:
- Available component-based products are researched and evaluated for the application domain in question.
- Component integration issues are considered.
- A software architecture is designed to accommodate the components.
- Components are integrated into the architecture.
- Comprehensive testing is conducted to ensure proper functionality.

The component-based development model leads to software reuse, and reusability provides software engineers with a number of measurable benefits. Your software engineering team can achieve a reduction in development cycle time as well as a reduction in project cost if component reuse becomes part of your culture.

THE FORMAL METHODS MODEL The formal methods model encompasses a set of activities that leads to formal mathematical specification of computer software. Formal methods enable you to specify, develop, and verify a computer-based system by applying a rigorous, mathematical notation. A variation on this approach, called cleanroom software engineering, is currently applied by some software development organizations. When formal methods are used during development, they provide a mechanism for eliminating many of the problems that are difficult to overcome using other software engineering paradigms. When formal methods are used during design, they serve as a basis for program verification and therefore enable you to discover and correct errors that might otherwise go undetected. The formal methods model offers the promise of defect-free software.

ASPECT-ORIENTED SOFTWARE DEVELOPMENT: Regardless of the software process that is chosen, the builders of complex software invariably implement a set of localized features, functions, and information content. These localized software characteristics are modeled as components (e.g., object-oriented classes) and then constructed within the context of a system architecture. As modern computer-based systems become more sophisticated (and complex), certain concerns (customer-required properties or areas of technical interest) span the entire architecture. Some concerns are high-level properties of a system (e.g., security, fault tolerance). Other concerns affect functions (e.g., the application of business rules), while others are systemic (e.g., task synchronization or memory management). When concerns cut across multiple system functions, features, and information, they are often referred to as crosscutting concerns. Aspectual requirements define those crosscutting concerns that have an impact across the software architecture. Aspect-oriented software development (AOSD), often referred to as aspect-oriented programming (AOP), is a relatively new software engineering paradigm that provides a process and methodological approach for defining, specifying, designing, and constructing aspects: mechanisms beyond subroutines and inheritance for localizing the expression of a crosscutting concern.

SOFTWARE PROJECT MANAGEMENT

The management spectrum: Effective software project management focuses on the four P's:
- People: stakeholders, team leaders, the software team, agile teams.
- Product: software scope; problem decomposition (partitioning or problem elaboration).
- Process: melding the product and the process; process decomposition.
- Project (approaches): start on the right foot, maintain momentum, track progress, make smart decisions, conduct a postmortem analysis.

The W5HH principle:
- Why is the system being developed? All stakeholders should assess the validity of business reasons for the software work. Does the business purpose justify the expenditure of people, time, and money?
- What will be done? The task set required for the project is defined.
- When will it be done? The team establishes a project schedule by identifying when project tasks are to be conducted and when milestones are to be reached.
- Who is responsible for a function? The role and responsibility of each member of the software team is defined.
- Where are they located organizationally? Not all roles and responsibilities reside within software practitioners. The customer, users, and other stakeholders also have responsibilities.
- How will the job be done technically and managerially? Once product scope is established, a management and technical strategy for the project must be defined.
- How much of each resource is needed? The answer to this question is derived by developing estimates based on answers to earlier questions.

ESTIMATION: LOC AND FP BASED ESTIMATION

Accurate estimation of the problem size is fundamental to satisfactory estimation of other project parameters such as effort, the time duration for completing the project, and the total cost for developing the software. The size of a project is not the number of bytes that the source code occupies, nor the byte size of the executable code. The project size is a measure of the problem complexity in terms of the effort and time required to develop the product. Currently, two metrics are popularly used to estimate size:
- Lines of Code (LOC)
- Function Point (FP)

Lines of Code (LOC): LOC is the simplest among all metrics available to estimate project size, and it is a popular metric. This metric measures the size of a project by counting the number of source instructions in the developed program. Obviously, while counting the number of source instructions, lines used for commenting the code and the header lines are ignored (a minimal counting sketch appears after this subsection). Determining the LOC count at the end of a project is very simple. However, accurate estimation of the LOC count at the beginning of a project is very difficult. Project managers usually divide the problem into modules, and each module into submodules, and so on, until the size of the different leaf-level modules can be approximately predicted. Past experience in developing similar products is very helpful here. By adding up the estimates of the lowest-level modules, project managers arrive at the total size estimation.

LOC as a measure of problem size has several shortcomings:
- LOC gives a numerical value of problem size that can vary widely with individual coding style, since different programmers lay out their code in different ways. One remedy is to count language tokens instead of lines of code, but the situation does not improve much even if tokens are counted.
- LOC is a measure of the coding activity alone; it computes the number of source lines in the final program.
- The LOC measure correlates poorly with the quality and efficiency of the code. Larger code size does not necessarily imply better quality or higher efficiency.
- The LOC metric penalizes the use of higher-level programming languages, code reuse, etc.
- The LOC metric measures the lexical complexity of a program and does not address the deeper issues of logical or structural complexity.
- It is very difficult to estimate the LOC of the final product from the problem specification. The LOC count can be accurately computed only after the code has been fully developed.
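Since the text states that comment lines and blank lines are ignored when counting LOC, a minimal counting sketch may help; it assumes C-style "//" line comments and uses a small hypothetical source fragment (block comments and strings containing "//" are deliberately not handled):

```python
# Minimal LOC counter sketch: counts non-blank, non-comment source lines.
def count_loc(lines):
    loc = 0
    for line in lines:
        stripped = line.strip()
        if stripped and not stripped.startswith("//"):
            loc += 1
    return loc

source = [
    "// compute factorial",       # comment line: not counted
    "int fact(int n) {",
    "    if (n <= 1) return 1;",
    "",                           # blank line: not counted
    "    return n * fact(n-1);",
    "}",
]
print(count_loc(source))  # prints 4
```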

FUNCTION POINT METRIC (FP): The FP metric was proposed by Albrecht. It overcomes many of the shortcomings of the LOC metric. One of the advantages of using FP is that it can be used to easily estimate the size of a software product directly from the problem specification. This is in contrast to the LOC metric, where the size can be accurately determined only after the product has been fully developed. The conceptual idea behind the function point metric is that the size of a software product is directly dependent on the number of different functions or features it supports. Each function, when invoked, reads some input data and transforms it to the corresponding output data. Thus, a count of the number of input and output data values of a system gives some indication of the number of functions supported by the system. In addition to the number of basic functions that a software product performs, its size is also dependent on the number of files and the number of interfaces. Interfaces refer to the different mechanisms that need to be supported for data transfer with other external systems.

FP is computed in three steps:
1. Compute the unadjusted function point count (UFP).
2. Refine the UFP to reflect the differences in the complexities of the different parameters in the expression for UFP computation.
3. Compute FP by further refining the UFP to account for the specific characteristics of the project that can influence the development effort.

UFP = (Number of inputs) × 4 + (Number of outputs) × 5 + (Number of inquiries) × 4 + (Number of files) × 10 + (Number of interfaces) × 10

Number of inputs: Each data item input by the user is counted. Data inputs should be distinguished from user inquiries. Inquiries are user commands such as print-account-balance, and are counted separately.

Number of outputs: The outputs considered refer to reports printed, screen outputs, error messages produced, etc. While computing the number of outputs, the individual data items within a report are not considered; a set of related data items is counted as one output.

Number of inquiries: The number of inquiries is the number of distinct interactive queries that can be made by the users. These inquiries are user commands that require specific action by the system.

Number of files: Each logical file is counted. A logical file implies a group of logically related data. Thus, logical files include data structures as well as physical files.

Number of interfaces: The interfaces considered are the interfaces used to exchange information with other systems, e.g., data files on tapes or disks, communication links with other systems, etc.

COMPUTATION OF UFP: The complexity level of each of the parameters is graded as simple, average, or complex. For example, instead of each input being counted as four function points, a very simple input can be counted as three function points and a complex input as six function points.

Type                   Simple   Average   Complex
Input (I)              3        4         6
Output (O)             4        5         7
Inquiry (E)            3        4         6
Number of Files (F)    7        10        15
Number of Interfaces   5        7         10

A technical complexity factor (TCF) for the project is computed, and the TCF is multiplied with the UFP to yield FP. The TCF expresses the overall impact on the development effort of various project parameters such as high transaction rates, response time requirements, scope of reuse, etc. Albrecht identified 14 parameters that can influence the development effort. Each of these 14 factors is assigned a value from 0 (not present or no influence) to 6 (strong influence). The resulting numbers are summed, yielding the total degree of influence (DI). The TCF is then computed as TCF = 0.65 + 0.01 × DI.

Finally FP is given as the product of UFP and TCF.

FP = UFP × TCF
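As an illustration, a short sketch under assumed counts (the numbers below are hypothetical, not from the source) shows how UFP, DI, TCF, and FP combine. It uses the average weights from the table above; note that the simplified UFP expression in the text uses 10 for interfaces, while the table's average weight is 7, so the weight chosen here is the table's:

```python
# Worked FP sketch with hypothetical counts, all parameters rated "average".
inputs, outputs, inquiries, files, interfaces = 20, 30, 15, 6, 2  # assumed

# Average weights from the complexity table: 4, 5, 4, 10, 7
ufp = inputs * 4 + outputs * 5 + inquiries * 4 + files * 10 + interfaces * 7

# Degree of influence: 14 factors, each rated 0..6 (here all assumed = 3)
di = 14 * 3
tcf = 0.65 + 0.01 * di           # technical complexity factor

fp = ufp * tcf
print(f"UFP = {ufp}, TCF = {tcf:.2f}, FP = {fp:.1f}")
# UFP = 364, TCF = 1.07, FP = 389.5
```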

FEATURE POINT METRIC:

Disadvantages of function point:
- It does not take into account the algorithmic complexity of software. It implicitly assumes that the effort required to design and develop any two different functionalities of the system is the same.
- It takes only the number of functions that the system supports into consideration, without distinguishing the difficulty of developing the various functionalities.

To overcome this problem, an extension of the function point metric called the feature point metric has been proposed. The feature point metric incorporates algorithm complexity as an extra parameter. This ensures that the computed size using the feature point metric reflects the fact that the greater the complexity of a function, the greater the effort required to develop it; therefore, its size should be larger than that of simpler functions.

COCOMO MODEL

COCOMO (COnstructive COst MOdel) was proposed by Boehm in 1981. Any software development project can be classified into one of the following three categories based on the development complexity:
- ORGANIC
- SEMIDETACHED
- EMBEDDED
To classify a product into the identified categories, Boehm requires us to consider not only the characteristics of the product but also those of the development team and the development environment.

CLASSIFICATION

1. Organic: The project deals with developing a well-understood application program, the size of the development team is reasonably small, and the team members are experienced in developing similar types of projects.

2. Semidetached: The development team consists of a mixture of experienced and inexperienced staff. Team members may have limited experience on related systems but may be unfamiliar with some aspects of the system being developed.

3. Embedded: The software being developed is strongly coupled to complex hardware, or strict regulations on the operational procedures exist.

According to Boehm, software cost estimation should be done through three stages: Basic COCOMO, Intermediate COCOMO, and Complete COCOMO.

Basic COCOMO

The basic COCOMO model gives an approximate estimate of the project parameters. The basic COCOMO estimation model is given by the following expressions:

Effort = a1 × (KLOC)^a2 PM
Tdev = b1 × (Effort)^b2 months

where KLOC is the estimated size of the software product expressed in Kilo Lines of Code; a1, a2, b1, b2 are constants for each category of software product; Tdev is the estimated time to develop the software, expressed in months; and Effort is the total effort required to develop the software product, expressed in person-months (PM). The effort estimate is the area under the person-month plot. It should be carefully noted that an effort of 100 PM does not imply that 100 persons should work for 1 month, nor does it imply that 1 person should be employed for 100 months; it denotes the area under the person-month curve.

According to Boehm, every line of source text should be calculated as one LOC irrespective of the actual number of instructions on that line. Thus, if a single instruction spans several lines (say n lines), it is considered to be n LOC. The values of a1, a2, b1, b2 for the different categories of products (i.e., organic, semidetached, and embedded) as given by Boehm [1981] are summarized below. He derived these expressions by examining historical data collected from a large number of actual projects.

Estimation of development effort: For the three classes of software products, the formulas for estimating the effort based on the code size are:

Organic:       Effort = 2.4 × (KLOC)^1.05 PM
Semidetached:  Effort = 3.0 × (KLOC)^1.12 PM
Embedded:      Effort = 3.6 × (KLOC)^1.20 PM

Estimation of development time: For the three classes of software products, the formulas for estimating the development time based on the effort are:

Organic:       Tdev = 2.5 × (Effort)^0.38 months
Semidetached:  Tdev = 2.5 × (Effort)^0.35 months
Embedded:      Tdev = 2.5 × (Effort)^0.32 months
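A small sketch shows how these expressions are applied for an organic-mode product; the 32 KLOC size is an assumed figure, while the constants are Boehm's published organic-mode values:

```python
# Basic COCOMO sketch for an organic-mode product of an assumed 32 KLOC.
kloc = 32.0

# Organic-mode constants from Boehm [1981]
a1, a2 = 2.4, 1.05   # effort equation constants
b1, b2 = 2.5, 0.38   # development time equation constants

effort = a1 * kloc ** a2        # person-months
tdev = b1 * effort ** b2        # calendar months
avg_staff = effort / tdev       # average team size implied by the estimate

print(f"Effort = {effort:.1f} PM, Tdev = {tdev:.1f} months, "
      f"average staff = {avg_staff:.1f}")
# roughly: Effort = 91 PM, Tdev = 14 months, average staff = 6.6
```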

From the effort estimate, the project cost can be obtained by multiplying the required effort by the manpower cost per month. Implicit in this project cost computation, however, is the assumption that the entire project cost is incurred on account of the manpower cost alone. In addition to manpower cost, a project incurs costs due to the hardware and software required for the project and the company overheads for administration, office space, etc. It is important to note that the effort and duration estimates obtained using the COCOMO model are called the nominal effort estimate and nominal duration estimate. The term nominal implies that if anyone tries to complete the project in a time shorter than the estimated duration, then the cost will increase drastically; but if anyone completes the project over a longer period of time than estimated, then there is almost no decrease in the estimated cost value.

Intermediate COCOMO

The basic COCOMO model assumes that effort and development time are functions of the product size alone. However, a host of other project parameters besides the product size affect the effort required to develop the product as well as the development time. Therefore, in order to obtain an accurate estimate of the effort and project duration, the effect of all relevant parameters must be taken into account. The intermediate COCOMO model recognizes this fact and refines the initial estimate obtained using the basic COCOMO expressions by using a set of 15 cost drivers (multipliers) based on various attributes of software development. For example, if modern programming practices are used, the initial estimates are scaled downward by multiplication with a cost driver having a value less than 1. If there are stringent reliability requirements on the software product, the initial estimate is scaled upward. Boehm requires the project manager to rate these 15 different parameters for a particular project on a scale of one to three. Then, depending on these ratings, he suggests appropriate cost driver values which should be multiplied with the initial estimate obtained using the basic COCOMO. In general, the cost drivers can be classified as being attributes of the following items:
- Product: The characteristics of the product that are considered include the inherent complexity of the product, reliability requirements of the product, etc.
- Computer: Characteristics of the computer that are considered include the execution speed required, storage space required, etc.
- Personnel: The attributes of development personnel that are considered include the experience level of personnel, programming capability, analysis capability, etc.
- Development environment: Development environment attributes capture the development facilities available to the developers. An important parameter that is considered is the sophistication of the automation (CASE) tools used for software development.

Complete COCOMO

A major shortcoming of both the basic and intermediate COCOMO models is that they consider a software product as a single homogeneous entity. However, most large systems are made up of several smaller subsystems. Some subsystems may be considered as organic type, some semidetached, and some embedded. Not only may the inherent development complexity of the subsystems differ, but for some subsystems the reliability requirements may be high, for some the development team might have no previous experience of similar development, and so on. The complete COCOMO model considers these differences in characteristics of the subsystems and estimates the effort and development time as the sum of the estimates for the individual subsystems. The cost of each subsystem is estimated separately. This approach reduces the margin of error in the final estimate.

The following development project can be considered as an example application of the complete COCOMO model. A distributed Management Information System (MIS) product for an organization having offices at several places across the country can have the following sub-components:
- Database part
- Graphical User Interface (GUI) part
- Communication part
Of these, the communication part can be considered as embedded software. The database part could be semidetached software, and the GUI part organic software.
The costs for these three components can be estimated separately and summed to give the overall cost of the system.

Conclusion: It is important to note that the effort and duration estimates obtained using the COCOMO model are called the nominal effort estimate and nominal duration estimate. The term nominal implies that if anyone tries to complete the project in a time shorter than the estimated duration, then the cost will increase drastically; but if anyone completes the project over a longer period of time than estimated, then there is almost no decrease in the estimated cost value.
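A minimal sketch of the complete COCOMO idea follows, estimating each MIS sub-component with its own mode and summing the efforts. The KLOC figures and the single illustrative cost-driver multiplier mentioned in the comments are assumptions, not values from the source:

```python
# Complete COCOMO sketch: estimate each subsystem in its own mode, then sum.
# (a1, a2) effort-equation constants per mode, from Boehm [1981]
MODES = {
    "organic":      (2.4, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (3.6, 1.20),
}

subsystems = [
    ("GUI part",           "organic",      8.0),   # assumed KLOC
    ("Database part",      "semidetached", 20.0),  # assumed KLOC
    ("Communication part", "embedded",     10.0),  # assumed KLOC
]

total = 0.0
for name, mode, kloc in subsystems:
    a1, a2 = MODES[mode]
    effort = a1 * kloc ** a2              # nominal effort, person-months
    # Intermediate COCOMO would further multiply by the product of the
    # 15 cost-driver values, e.g. effort *= 1.15 for high reliability.
    total += effort
    print(f"{name:20s} {mode:12s} {effort:6.1f} PM")

print(f"{'Total':20s} {'':12s} {total:6.1f} PM")
```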

PROJECT SCHEDULING

Software project scheduling is an action that distributes estimated effort across the planned project duration by allocating the effort to specific software engineering tasks. It is important to note, however, that the schedule evolves over time. During early stages of project planning, a macroscopic schedule is developed. This type of schedule identifies all major process framework activities and the product functions to which they are applied. As the project gets under way, each entry on the macroscopic schedule is refined into a detailed schedule. Here, specific software actions and tasks (required to accomplish an activity) are identified and scheduled.

Scheduling for software engineering projects can be viewed from two rather different perspectives. In the first, an end date for release of a computer-based system has already (and irrevocably) been established. The software organization is constrained to distribute effort within the prescribed time frame. The second view of software scheduling assumes that rough chronological bounds have been discussed but that the end date is set by the software engineering organization. Effort is distributed to make best use of resources, and an end date is defined after careful analysis of the software.

Project-task scheduling is an important project planning activity. It involves deciding which tasks will be taken up when. In order to schedule the project activities, a software project manager needs to do the following:
1. Identify all the tasks needed to complete the project.
2. Break down large tasks into small activities.
3. Determine the dependencies among different activities.
4. Establish the most likely estimates for the time durations necessary to complete the activities.
5. Allocate resources to activities.
6. Plan the starting and ending dates for various activities.
7. Determine the critical path. A critical path is the chain of activities that determines the duration of the project.

Basic Principles:
- Compartmentalization. The project must be compartmentalized into a number of manageable activities and tasks. To accomplish compartmentalization, both the product and the process are refined.
- Interdependency. The interdependency of each compartmentalized activity or task must be determined. Some tasks must occur in sequence, while others can occur in parallel. Some activities cannot commence until the work product produced by another is available. Other activities can occur independently.
- Time allocation. Each task to be scheduled must be allocated some number of work units (e.g., person-days of effort). In addition, each task must be assigned a start date and a completion date that are a function of the interdependencies and whether work will be conducted on a full-time or part-time basis.
- Effort validation. Every project has a defined number of people on the software team. As time allocation occurs, you must ensure that no more than the allocated number of people has been scheduled at any given time.
- Defined responsibilities. Every task that is scheduled should be assigned to a specific team member.
- Defined outcomes. Every task that is scheduled should have a defined outcome. For software projects, the outcome is normally a work product (e.g., the design of a component) or a part of a work product. Work products are often combined in deliverables.
- Defined milestones. Every task or group of tasks should be associated with a project milestone. A milestone is accomplished when one or more work products has been reviewed for quality and has been approved.

The Relationship between People and Effort: In a small software development project, a single person can analyze requirements, perform design, generate code, and conduct tests. As the size of a project increases, more people must become involved. The Putnam-Norden-Rayleigh (PNR) curve provides an indication of the relationship between effort applied and delivery time for a software project. The curve indicates a minimum value t_o that represents the least cost for delivery (i.e., the delivery time that will result in the least effort expended). As we move left of t_o (i.e., as we try to accelerate delivery), the curve rises nonlinearly. As an example, assume that a project team has estimated that a level of effort E_d will be required to achieve a nominal delivery time t_d that is optimal in terms of schedule and available resources. In fact, the PNR curve indicates that the project delivery time cannot be compressed much beyond 0.75 t_d. The PNR curve also indicates that the lowest-cost delivery option is t_o = 2 t_d; the implication is that delaying project delivery can reduce costs significantly. The number of delivered lines of code (source statements), L, is related to effort and development time by the software equation:

L = P × E^(1/3) × t^(4/3)

where E is development effort in person-months, P is a productivity parameter that reflects a variety of factors that lead to high-quality software engineering work (typical values for P range between 2,000 and 12,000), and t is the project duration in calendar months. Rearranging this software equation, we can arrive at an expression for development effort E:

E = L^3 / (P^3 × t^4)

where E is the effort expended (in person-years) over the entire life cycle for software development and maintenance, and t is the development time in years. The equation for development effort can be related to development cost by the inclusion of a burdened labor rate factor ($/person-year).

Effort Distribution: Each of the software project estimation techniques leads to estimates of work units (e.g., person-months) required to complete software development. A recommended distribution of effort across the software process is often referred to as the 40-20-40 rule: forty percent of all effort is allocated to front-end analysis and design, coding is deemphasized at 20 percent, and a similar 40 percent is applied to back-end testing. This effort distribution should be used as a guideline only; the characteristics of each project dictate the actual distribution of effort. Work expended on project planning rarely accounts for more than 2 to 3 percent of effort, unless the plan commits an organization to large expenditures with high risk. Customer communication and requirements analysis may comprise 10 to 25 percent of project effort. Effort expended on analysis or prototyping should increase in direct proportion with project size and complexity. A range of 20 to 25 percent of effort is normally applied to software design. Time expended for design review and subsequent iteration must also be considered. Because of the effort applied to software design, code should follow with relatively little difficulty; a range of 15 to 20 percent of overall effort is typical. Testing and subsequent debugging can account for 30 to 40 percent of software development effort. The criticality of the software often dictates the amount of testing that is required. If software is human rated (i.e., software failure can result in loss of life), even higher percentages are typical. A numeric sketch of the software equation and this distribution follows.
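The sketch below evaluates the rearranged software equation under assumed values (L, P, and t are hypothetical, chosen only for illustration) and then applies the 40-20-40 guideline to the resulting effort:

```python
# Software-equation sketch with assumed values: 33,200 delivered LOC,
# productivity parameter P = 12,000, development time t = 1.3 years.
L = 33_200          # delivered lines of code
P = 12_000          # productivity parameter
t = 1.3             # development time in years

# Rearranged software equation: E = L^3 / (P^3 * t^4), in person-years
E_person_years = L ** 3 / (P ** 3 * t ** 4)
E_person_months = E_person_years * 12

print(f"Effort = {E_person_months:.0f} person-months")  # about 89 PM

# 40-20-40 guideline applied to that effort
for phase, share in [("analysis/design", 0.40), ("coding", 0.20),
                     ("testing", 0.40)]:
    print(f"{phase:16s} {E_person_months * share:5.1f} PM")
```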

DEFINING A TASK SET FOR THE SOFTWARE PROJECT

A task set is a collection of software engineering work tasks, milestones, work products, and quality assurance filters that must be accomplished to complete a particular project. The task set must provide enough discipline to achieve high software quality, but at the same time it must not burden the project team with unnecessary work. To develop a project schedule, a task set must be distributed on the project time line. The task set will vary depending upon the project type and the degree of rigor with which the software team decides to do its work.

DEFINING A TASK NETWORK

A task network, also called an activity network, is a graphic representation of the task flow for a project. It is sometimes used as the mechanism through which task sequence and dependencies are input to an automated project scheduling tool. The task network depicts major software engineering actions. The concurrent nature of software engineering actions leads to a number of important scheduling requirements. Because parallel tasks occur asynchronously, you should determine intertask dependencies to ensure continuous progress toward completion. In addition, you should be aware of the tasks that lie on the critical path: tasks that must be completed on schedule if the project as a whole is to be completed on schedule.
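A task network can be represented directly in code. The minimal Python sketch below (an illustrative assumption, not a prescribed implementation) records intertask dependencies and finds the critical path as the longest path through the dependency graph; task names and durations are hypothetical.

duration = {            # task -> duration in days (assumed)
    "scoping": 3,
    "planning": 2,
    "risk_analysis": 2,
    "engineering": 10,
    "release": 1,
}
predecessors = {        # intertask dependencies (a task starts after all predecessors finish)
    "scoping": [],
    "planning": ["scoping"],
    "risk_analysis": ["scoping"],
    "engineering": ["planning", "risk_analysis"],
    "release": ["engineering"],
}

finish, via = {}, {}    # earliest finish time and critical predecessor per task

def earliest_finish(task):
    if task not in finish:
        preds = predecessors[task]
        start = max((earliest_finish(p) for p in preds), default=0)
        via[task] = max(preds, key=finish.get, default=None)
        finish[task] = start + duration[task]
    return finish[task]

end = max(duration, key=earliest_finish)    # task with the latest finish
path = []
while end is not None:                      # walk back along critical predecessors
    path.append(end)
    end = via[end]
print("Critical path:", " -> ".join(reversed(path)))
print("Project duration:", max(finish.values()), "days")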

SCHEDULING

Scheduling of a software project does not differ greatly from scheduling of any multitask engineering effort. Therefore, generalized project scheduling tools and techniques can be applied with little modification to software projects. Program evaluation and review technique (PERT) and the critical path method (CPM) are two project scheduling methods that can be applied to software development. Both techniques are driven by information already developed in earlier project planning activities: estimates of effort, a decomposition of the product function, the selection of the appropriate process model and task set, and decomposition of the tasks. Interdependencies among tasks may be defined using a task network. Tasks, sometimes called the project work breakdown structure (WBS), are defined for the product as a whole or for individual functions. Both PERT and CPM provide quantitative tools that allow you to (1) determine the critical path, the chain of tasks that determines the duration of the project; (2) establish most likely time estimates for individual tasks by applying statistical models (see the sketch below); and (3) calculate boundary times that define a time window for a particular task.

Time-Line Charts

When creating a software project schedule, you begin with a set of tasks (the work breakdown structure). If automated tools are used, the work breakdown is input as a task network or task outline. Effort, duration, and start date are then input for each task. In addition, tasks may be assigned to specific individuals. As a consequence of this input, a time-line chart, also called a Gantt chart, is generated. A time-line chart can be developed for the entire project, or separate charts can be developed for each project function or for each individual working on the project.
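As a minimal illustration of item (2) above, the classic PERT three-point estimate combines optimistic (a), most likely (m), and pessimistic (b) durations into an expected duration te = (a + 4m + b) / 6; the Python sketch below applies it to hypothetical task values.

def pert_estimate(a, m, b):
    """Expected duration and standard deviation under the PERT beta assumptions."""
    te = (a + 4 * m + b) / 6
    sd = (b - a) / 6
    return te, sd

te, sd = pert_estimate(a=4, m=6, b=11)  # durations in days, assumed
print(f"Expected duration: {te:.1f} days (std dev {sd:.1f})")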

The chart depicts a part of a software project schedule that emphasizes the concept scoping task for a word-processing (WP) software product. All project tasks (for concept scoping) are listed in the left-hand column. The horizontal bars indicate the duration of each task. When multiple bars occur at the same time on the calendar, task concurrency is implied. The diamonds indicate milestones.

Tracking the Schedule

The project schedule becomes a road map that defines the tasks and milestones to be tracked and controlled as the project proceeds. Tracking can be accomplished in a number of different ways:
- Conducting periodic project status meetings in which each team member reports progress and problems
- Evaluating the results of all reviews conducted throughout the software engineering process
- Determining whether formal project milestones have been accomplished by the scheduled date
- Comparing the actual start date to the planned start date for each project task listed in the resource table
- Meeting informally with practitioners to obtain their subjective assessment of progress to date and problems on the horizon
- Using earned value analysis to assess progress quantitatively

In reality, all of these tracking techniques are used by experienced project managers. Control is employed by a software project manager to administer project resources, cope with problems, and direct project staff. If things are going well, control is light. But when problems occur, you must exercise control to reconcile them as quickly as possible. After a problem has been diagnosed, additional resources may be focused on the problem area: staff may be redeployed or the project schedule can be redefined.

When faced with severe deadline pressure, experienced project managers sometimes use a project scheduling and control technique called time-boxing. The time-boxing strategy recognizes that the complete product may not be deliverable by the predefined deadline. Therefore, an incremental software paradigm is chosen, and a schedule is derived for each incremental delivery. The tasks associated with each increment are then time-boxed: the schedule for each task is adjusted by working backward from the delivery date for the increment, and a box is put around each task. When a task hits the boundary of its time box (plus or minus 10 percent), work stops and the next task begins.
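The backward scheduling step of time-boxing can be sketched as follows; the delivery date, tasks, and durations in this Python fragment are hypothetical.

from datetime import date, timedelta

delivery = date(2024, 9, 30)   # assumed delivery date for the increment
tasks = [("analysis", 5), ("design", 8), ("code", 10), ("test", 7)]  # (name, days)

# Walk backward from the delivery date, boxing each task in turn.
end, boxes = delivery, []
for name, days in reversed(tasks):
    start = end - timedelta(days=days)
    boxes.append((name, start, end))
    end = start

for name, start, stop in reversed(boxes):   # print in chronological order
    print(f"{name:>9}: {start} -> {stop}")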

EARNED VALUE ANALYSIS

It is reasonable to ask whether there is a quantitative technique for assessing progress as the software team progresses through the work tasks allocated to the project schedule. In fact, a technique for performing quantitative analysis of progress does exist; it is called earned value analysis (EVA). The earned value system provides a common value scale for every [software project] task, regardless of the type of work being performed. The total hours to do the whole project are estimated, and every task is given an earned value based on its estimated percentage of the total. EVA enables you to assess the percent completeness of a project using quantitative analysis. To determine the earned value, the following steps are performed:

1. The budgeted cost of work scheduled (BCWS) is determined for each work task represented in the schedule. During estimation, the work (in person-hours or person-days) of each software engineering task is planned. Hence, BCWSi is the effort planned for work task i. To determine progress at a given point along the project schedule, the value of BCWS is the sum of the BCWSi values for all work tasks that should have been completed by that point in time on the project schedule.

2. The BCWS values for all work tasks are summed to derive the budget at completion (BAC). Hence,

BAC = sum of (BCWSi) over all tasks i

3. Next, the value for budgeted cost of work performed (BCWP) is computed. The value for BCWP is the sum of the BCWS values for all work tasks that have actually been completed by a point in time on the project schedule.

Given values for BCWS, BAC, and BCWP, important progress indicators can be computed:

Schedule performance index, SPI = BCWP / BCWS
Schedule variance, SV = BCWP - BCWS

SPI is an indication of the efficiency with which the project is utilizing scheduled resources. An SPI value close to 1.0 indicates efficient execution of the project schedule. SV is simply an absolute indication of variance from the planned schedule.

Percent scheduled for completion = BCWS / BAC

provides an indication of the percentage of work that should have been completed by time t.

Percent complete = BCWP / BAC

provides a quantitative indication of the percent of completeness of the project at a given point in time t. It is also possible to compute the actual cost of work performed (ACWP). The value for ACWP is the sum of the effort actually expended on work tasks that have been completed by a point in time on the project schedule. It is then possible to compute

Cost performance index, CPI = BCWP / ACWP
Cost variance, CV = BCWP - ACWP

A CPI value close to 1.0 provides a strong indication that the project is within its defined budget. CV is an absolute indication of cost savings (against planned costs) or shortfall at a particular stage of a project.
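The indicators above translate directly into code. In the following Python sketch, the BCWS, BCWP, and ACWP figures are hypothetical; the script simply evaluates the formulas defined in this section.

bcws_all = [12, 15, 13, 20, 18, 22]   # planned effort per task (person-days); sums to BAC
done     = [True, True, True, False, False, False]  # tasks actually completed so far
acwp     = 44.0                       # actual effort expended to date (assumed)
bcws_t   = 40.0                       # effort that should be complete by time t (assumed)

bac  = sum(bcws_all)                                 # budget at completion
bcwp = sum(v for v, d in zip(bcws_all, done) if d)   # earned value of completed tasks

spi = bcwp / bcws_t   # schedule performance index
sv  = bcwp - bcws_t   # schedule variance
cpi = bcwp / acwp     # cost performance index
cv  = bcwp - acwp     # cost variance

print(f"SPI={spi:.2f}  SV={sv:+.1f}  CPI={cpi:.2f}  CV={cv:+.1f}")
print(f"Percent scheduled: {bcws_t / bac:.0%}  Percent complete: {bcwp / bac:.0%}")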

RISK MANAGEMENT

A risk is any anticipated unfavorable event or circumstance that can occur while a project is underway. Risk management aims at reducing the impact of all kinds of risks that might affect a project. It consists of three essential activities:
- Risk identification
- Risk assessment
- Risk containment

Risk Identification: The project manager needs to anticipate the risks in a project as early as possible so that the impact of the risks can be minimized by making effective risk management plans; early risk identification is therefore important. For example, we might worry whether the vendors who have been asked to develop certain modules will complete their work in time, whether they will turn in poor-quality work, or whether some key personnel might leave the organization. All such risks that can affect a project must be identified and listed. Three categories of risk can affect a software project:

Project risks. These concern various forms of budgetary, schedule, personnel, resource, and customer-related problems. An important project risk is schedule slippage. Since software is intangible, it is very difficult to monitor and control a software project.

Technical risks. These concern potential design, implementation, interfacing, testing, and maintenance problems. They also include ambiguous specification, incomplete specification, technical uncertainty, and technical obsolescence. Most technical risks occur due to the development team's insufficient knowledge about the product.

Business risks. This type of risk includes the risk of building an excellent product that no one wants and the risk of losing budgetary or personnel commitment.

Risk Assessment: The objective is to rank the risks in terms of their damage-causing potential. For risk assessment, each risk is first rated in two ways:
- The likelihood of the risk coming true (r)
- The consequence of the problems associated with the risk (s)

Based on these two factors, the priority of each risk can be computed:

p = r * s

where p is the priority with which the risk must be handled, r is the probability of the risk becoming true, and s is the severity of the damage caused if the risk does become true. If all identified risks are prioritized, then the most likely and most damaging risks can be handled first, and more comprehensive risk abatement procedures can be designed for these risks.

Risk Containment: After all the identified risks of a project are assessed, plans must be made to contain the most damaging and the most likely risks. Different risks require different containment procedures; in fact, most risks require ingenuity on the part of the project manager in tackling them. There are three main strategies for risk containment:

Avoid the risk: This may take several forms, such as discussing with the customer a change in requirements to reduce the scope of the work, or giving incentives to engineers to avoid the risk of manpower turnover.

Transfer the risk: This strategy involves getting the risky component developed by a third party, buying insurance cover, and so on.

Risk reduction: This involves planning ways to contain the damage due to a risk. For example, if there is a risk that some key personnel might leave, new recruitment may be planned.

Risk leverage: To choose between the different strategies of handling a risk, the project manager must consider the cost of handling the risk and the corresponding reduction of risk. For this, the risk leverage of the different risks can be computed. Risk leverage is the difference in risk exposure divided by the cost of reducing the risk. More formally:

risk leverage = (risk exposure before reduction - risk exposure after reduction) / (cost of reduction)
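The priority and leverage formulas above can be applied mechanically, as in the Python sketch below; the risk entries and cost figures are invented for illustration.

risks = [  # (description, probability r, severity s in cost units) -- hypothetical
    ("key personnel leave",    0.4, 50),
    ("vendor module is late",  0.6, 30),
    ("spec remains ambiguous", 0.3, 70),
]

# Priority p = r * s; handle the highest-priority risks first.
for desc, r, s in sorted(risks, key=lambda x: x[1] * x[2], reverse=True):
    print(f"p = {r * s:5.1f}  {desc}")

def risk_leverage(exposure_before, exposure_after, cost_of_reduction):
    """(risk exposure before reduction - after reduction) / cost of reduction."""
    return (exposure_before - exposure_after) / cost_of_reduction

# e.g., recruiting backups cuts the personnel risk's exposure from 20 to 5 at cost 6
print(f"leverage = {risk_leverage(20, 5, 6):.1f}")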