Establishing Process Performance Baselines and Models for Statistical Control of Software Projects

Umesh Kumar Mishra, K. Harihara sudhan, Shalu Gupta

Proceedings of ASCNT-2011, CDAC, Noida, India, pp

Abstract: Improving software development processes is a demanding and complex task. It needs continuous improvement in defined processes. This paper proposes a data-driven approach for process improvement. The major components of this approach are: (i) identification of the process for improvement, (ii) identification and collection of data for the process improvement, (iii) identification of statistical methods/techniques for data analysis, and (iv) creation of a model for process improvement.

Key Words: Process Performance Baselines, Process Performance Models, Data Analysis, Statistical Method

1.  Introduction 

Metrics analysis is one of the required processes in the software development process of any CMMi high maturity organization. The first step is to identify the users of each metric. The user of a metric is the person taking action based upon the metric. There are various types of users for a metrics program. This adds complexity to the program, because each user may have different information requirements. Table 1 gives the types of metrics users and their requirements.

Table 1: User types and their requirements

1. Top Management: Interested in applying greater control to the software development process, reducing risk and maximizing return on investment.
2. Project Managers: Interested in being able to accurately predict project size, effort, resources, budgets and schedules. Interested in controlling the projects they are in charge of and communicating facts to their management.
3. Software Engineers/Programmers: Interested in making informed decisions about their work and work products.
4. Test Managers/Testers: Interested in finding as many new defects as possible in the time allocated to testing and in obtaining confidence that the software works as specified.
5. Specialists (individuals performing specialized functions, e.g. Marketing, Software Quality Assurance, Software Configuration Management, Audits and Assessments): Interested in quantitative information upon which they can base their decisions, findings and recommendations.
6. Customers/Users: Interested in on-time delivery of high-quality software products and in reducing the overall cost of ownership.



Organizational Goal Identification

Basili and Rombach define the Goal/Question/Metric (GQM) paradigm, which provides an excellent mechanism for defining a goal-based measurement program. The first step in setting up a metrics program is to select one or more measurable goals. The goals we select in the Goal/Question/Metric approach will vary depending on the level we are considering for our metrics. The business goals of C-DAC Noida are as follows:

1. To improve the quality of deliverables.
2. To increase the reusability of components.
3. To reduce rework.
4. To improve schedule performance.
5. To improve manpower utilization.

At the project level, we consider goals that emphasize project management and control issues or project-level requirements and objectives. These goals typically reflect project success factors such as on-time delivery, finishing the project within budget, or delivering software with the required level of quality or performance. At the specific task level, we consider goals that emphasize task success factors; many times these are expressed in terms of the entry and exit criteria for the task.

Software metrics programs must be designed to provide the specific information necessary to manage software projects and improve software engineering processes and services. Organizational, project and task goals are determined in advance, and then metrics are selected based on those goals. These metrics are used to determine our effectiveness in meeting these goals.

This paper is organized as follows:

Section 2 deals with Defining Metrics.

Section 3 describes Metrics Identification to achieve the Organizational Goals.

Section 4 describes the procedure for Data collection, Verification and Storage.

Section 5 describes the analysis of collected data.

Section 6 describes the establishment of Process Performance Baselines and Process Performance Models.

2.  Defining Metrics

An important step is to agree on standard definitions for the entities and their measured attributes. When we use terms like defect, problem report, size, and even project, other people will interpret these words in their own context, with meanings that may differ from our intended definition. These interpretation differences increase when more ambiguous terms like quality, maintainability and user-friendliness are used. So, across the organization, all measures and their definitions need to be stated clearly.

Table 2: Sample Measures and Definitions

1. Size: Sizes of the projects are estimated on a process basis or using Function Point Analysis (FPA).
2. Effort: The standard unit to capture effort is person hours. The following conversions are used wherever required: 1 PM (Person Month) = 22 Person Days; 1 PD (Person Day) = 8 Person Hours.
3. Schedule: Schedule is measured in calendar days, i.e. planned dates and actual dates.
4. Defect: Any element of a program design or implementation that must be changed to correct the program [1].
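The effort conversions above are simple arithmetic; the following is a minimal sketch of them in Python. The function names are illustrative and not part of the paper's toolset.

```python
# Effort unit conversions from Table 2:
# 1 Person Month (PM) = 22 Person Days (PD), and 1 PD = 8 Person Hours.

PD_PER_PM = 22      # person days per person month
HOURS_PER_PD = 8    # person hours per person day

def person_months_to_hours(pm: float) -> float:
    """Convert effort in person months to person hours."""
    return pm * PD_PER_PM * HOURS_PER_PD

def hours_to_person_months(hours: float) -> float:
    """Convert effort in person hours back to person months."""
    return hours / (PD_PER_PM * HOURS_PER_PD)

if __name__ == "__main__":
    print(person_months_to_hours(1.5))    # 1.5 PM -> 264.0 person hours
    print(hours_to_person_months(264.0))  # 264 person hours -> 1.5 PM
```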


There should be a clear understanding of what is to be measured, how it is to be measured, when it is to be measured and where it is to be measured. Additionally, individuals may use different terms to mean the same thing. For example, the terms defect report, problem report, incident report, fault report, or customer call report may be used by various organizations to mean similar things, but unfortunately they may refer to different entities. One external customer may use customer call report to refer to their complaint and problem report as the description of the defect in the software, while other customers may use problem report for the initial complaint. Differing interpretations of terminology may be one of the biggest barriers to understanding.

Unfortunately, there is little standardization in the industry of the definitions for most software attributes.

Everyone has an opinion and the debate will probably continue for many years. The approach we suggest

is to adopt standard definitions within your organization and then apply them consistently. You can use

those definitions that do exist within the industry as a foundation to get you started.

3.  Metrics Identification to achieve the Organizational Goals

Each selected metric has a clear objective: it answers one or more questions that need to be answered to determine whether we are meeting our goals. When selecting metrics, we must be practical, realistic and pragmatic. Avoid the "ivory-tower" perspective that is completely removed from the existing software-engineering environment. Start with what is possible within the current process. Once we have a few successes, our users will be open to more radical ideas and may even come up with a few of their own. Table 3 shows some metrics derived at C-DAC Noida.

Table 3: Metrics (Source: C-DAC Noida OPP)

Schedule Variance: ((Actual End Date - Planned End Date) / ((Planned End Date - Planned Start Date) + 1)) * 100
Effort Variance: ((Actual Effort - Planned Effort) / Planned Effort) * 100
Delivered Defect Density: Defects at SIT / Size of the module (Note: Size of module = total number of processes in the module)
Requirement Stability Index: (1 - ((New Requirements + Changed Requirements + Deleted Requirements) / Original Requirements)) * 100
SRS Review Effectiveness: SRS Review Defects / (SRS Review Defects + Design Review Defects + Code Review Defects + Unit Testing Defects) * 100
Design Review Effectiveness: Design Review Defects / (SRS Review Defects + Design Review Defects + Code Review Defects + Unit Testing Defects) * 100
Code Review Effectiveness: Code Review Defects / (SRS Review Defects + Design Review Defects + Code Review Defects + Unit Testing Defects) * 100
SRS Review Efficiency: SRS Review Defects / SRS Review Effort


Design Review Efficiency: Design Review Defects / Design Review Effort
Design Rework Effort: Rework at Design Phase / Total Actual Work at Design Phase * 100
Code Rework Effort: Rework at Coding Phase / Total Actual Work at Coding Phase * 100
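To make the Table 3 formulas concrete, here is a small sketch of a few of them (schedule variance, effort variance and SRS review effectiveness) in Python. The function and parameter names are illustrative assumptions, not taken from C-DAC's templates.

```python
# Minimal sketch of selected Table 3 metric formulas.
from datetime import date

def schedule_variance(planned_start: date, planned_end: date, actual_end: date) -> float:
    """((Actual End - Planned End) / ((Planned End - Planned Start) + 1)) * 100, in %."""
    planned_duration = (planned_end - planned_start).days + 1
    return (actual_end - planned_end).days / planned_duration * 100

def effort_variance(planned_effort: float, actual_effort: float) -> float:
    """((Actual Effort - Planned Effort) / Planned Effort) * 100, in %."""
    return (actual_effort - planned_effort) / planned_effort * 100

def srs_review_effectiveness(srs: int, design: int, code: int, unit_test: int) -> float:
    """SRS Review Defects / (SRS + Design + Code + Unit Testing defects) * 100, in %."""
    return srs / (srs + design + code + unit_test) * 100

if __name__ == "__main__":
    print(schedule_variance(date(2010, 1, 1), date(2010, 3, 31), date(2010, 4, 20)))  # ~22.2%
    print(effort_variance(planned_effort=400, actual_effort=460))                     # 15.0%
    print(srs_review_effectiveness(srs=12, design=10, code=8, unit_test=10))          # 30.0%
```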

Software metrics do not solve problems; people solve problems. Software metrics act as indicators and provide information so people can make more informed decisions and intelligent choices. An individual metric performs one of four functions. Metrics can help us to Understand more about our software products, processes, and services. Metrics can be used to Evaluate our software products, processes, and services against established standards and goals. Metrics can provide the information we need to Control the resources and processes used to produce our software. Metrics can be used to Predict attributes of software entities in the future. A comprehensive software metrics program would include metrics that perform all of these functions.

4.  Data Collection, Data Verification and Storage

The principal tasks associated with collecting and retaining data for process management are as follows:
- Designing the methods and obtaining the tools that will be used to support data collection and retention.
- Obtaining and training the staff that will execute the data collection procedures.
- Capturing and recording the data for each process that is targeted for measurement.
- Using defined forms and formats to supply the collected data to the individuals and groups who perform analyses.
- Monitoring the execution and performance of the activities for collecting and retaining data.

Data are collected from the various projects across C-DAC Noida and stored in the metrics sheet according to their buckets. Project-level data collected throughout the project lifecycle pertain to the standard measures such as Size, Effort, Schedule, Defects, Effectiveness, Efficiency, etc. Before you begin analysing measurement data, there are certain criteria that the reported values must satisfy if your analyses are to hold any merit or credibility. These criteria include Verity (values are of the correct type, are in the correct format, are within specified ranges, are complete, and are arithmetically correct), Synchronicity, Consistency and Validity. At C-DAC Noida, the data collection and verification tasks are as follows:
- Project Managers collect the data for the measures as per the data definitions and the defined frequency.
- Project Managers report the collected data in the relevant metrics templates monthly, at phase end, at release end or at milestones, as defined by QA (Quality Assurance).
- Project teams perform an integrity check on the data collected from the projects. Integrity checks could include comparison of the measures collected with the source of the data, checking the correctness of the source of the data (based on sampling), etc. The Metrics Report Verification Checklist may be used for this purpose.
- QA reviews the metrics reports using the Metrics Report Verification Checklist, reports findings to the project teams, and ensures closure of all review observations. QA submits the final metrics reports to the Metrics Council.
- Project Managers present the Metrics Analysis Reports to the project team.
- The Metrics Council collates the measurement data according to the classifications defined for projects.
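As an illustration of the verity criteria described earlier in this section (correct type, within range, complete), the following is a minimal integrity-check sketch in Python using pandas. The column names and ranges are assumptions for the sketch, not the actual Metrics Report Verification Checklist.

```python
# Illustrative integrity checks for reported metric values: completeness,
# numeric type, and non-negative ranges (hypothetical column names).
import pandas as pd

EXPECTED_COLUMNS = ["project", "planned_effort", "actual_effort", "defects_at_sit"]

def verify_metrics_report(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable problems found in a metrics report."""
    problems = []
    # Completeness: all expected columns present, no missing values.
    for col in EXPECTED_COLUMNS:
        if col not in df.columns:
            problems.append(f"missing column: {col}")
        elif df[col].isna().any():
            problems.append(f"missing values in column: {col}")
    # Type and range: efforts and defect counts must be numeric and non-negative.
    for col in ["planned_effort", "actual_effort", "defects_at_sit"]:
        if col in df.columns:
            values = pd.to_numeric(df[col], errors="coerce")
            if values.isna().any():
                problems.append(f"non-numeric values in column: {col}")
            elif (values < 0).any():
                problems.append(f"negative values in column: {col}")
    return problems
```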


5. Analysis of Collected Data

Data are generally collected as a basis for action. No matter what the data are or how the values are presented, you must always use some method of analysis to extract and interpret the information that lies in the data. Making sense of data is a process in itself, as illustrated schematically in Fig. 1.

Fig. 1: Interpretation of data requires analysis (input: data; transformation: analysis; output: interpretation)

The reason for analysing process data is to draw inferences that can be used to guide decisions and actions. Drawing inferences, that is, conclusions and predictions, from data depends not only on using appropriate analytical methods and tools but also on understanding the underlying nature of the data and the appropriateness of assumptions about the conditions and environment in which the data were obtained [1].

Fig. 2: Sample graph for normality test (normal probability plot of DD at SIT; Mean 2.211, StDev 0.6433, N 22, AD 0.586, P-Value 0.114)

Before doing any analysis, there is a need to check the normality of the data. In statistics [2], normality tests are used to determine whether a data set is well modeled by a normal distribution, or to compute how likely an underlying random variable is to be normally distributed. Fig. 2 shows a sample normal probability plot; normality tests can be performed using software tools. For the project-level analysis,



project teams need to analyse the collected data using various tools. Data analysis needs to be performed to identify process performance and to make a process improvement plan.
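A minimal sketch of such a normality check is shown below, using the Anderson-Darling test from SciPy; the paper itself only states that software tools (such as Minitab) are used, and the sample data here are invented for illustration.

```python
# Anderson-Darling normality test on hypothetical delivered-defect-density data.
import numpy as np
from scipy import stats

dd_at_sit = np.array([1.2, 2.4, 1.9, 2.8, 2.1, 1.5, 2.6, 3.0, 2.2, 1.8,
                      2.5, 2.0, 1.7, 2.9, 2.3])

result = stats.anderson(dd_at_sit, dist="norm")
print("AD statistic:", result.statistic)
# Compare the statistic against the critical value at each significance level.
for crit, sig in zip(result.critical_values, result.significance_level):
    print(f"reject normality at {sig}% level: {result.statistic > crit}")
```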

6. Establishing Process Performance Baselines and Models

Major steps involved in creating Process Performance Baselines (PPB) are:
- Data from the metrics reports submitted for the selected period are consolidated.
- Consolidated output is generated for each lifecycle methodology.
- The consolidated outputs are reviewed for completeness and integrity.
- For the modified waterfall methodology, the data points are segregated into full-lifecycle and partial-lifecycle execution data points.
- Extreme outliers that look like errors are removed from the consolidated output, and the project manager is informed for root cause analysis.
- If the number of data points is low, data from the previous PPB period are merged and then the analysis is carried out.
- The outliers falling beyond the process limits (based on 3-sigma limits) are identified and recorded separately (see the sketch after this list).
- Data points with these variations are analysed for their causes so that special causes are identified and removed from the data set and the process.
- Such data points are reported to the concerned project teams so that actions are taken on these special cause variations and the process is stabilized.
- All other data points, which include the common cause variations, are considered for establishing organizational baselines.
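The following is a small sketch of the outlier-screening step, flagging points beyond 3-sigma process limits as candidate special-cause variations. It illustrates the described procedure under assumed data; it is not the organization's actual tooling.

```python
# 3-sigma outlier screening: points beyond the process limits are flagged
# as candidate special-cause variations for root cause analysis.
import numpy as np

def three_sigma_limits(values: np.ndarray) -> tuple[float, float, float]:
    """Return (LCL, CL, UCL) as mean -/+ 3 sample standard deviations."""
    cl = values.mean()
    sigma = values.std(ddof=1)
    return cl - 3 * sigma, cl, cl + 3 * sigma

def split_special_causes(values: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Separate points inside the process limits from candidate special causes."""
    lcl, _, ucl = three_sigma_limits(values)
    outside = (values < lcl) | (values > ucl)
    return values[~outside], values[outside]

# Hypothetical effort-variance data points (%); the extreme value is flagged.
effort_variance = np.array([5, -3, 8, 12, 4, -6, 7, 2, -1, 9,
                            3, 6, -4, 10, 1, 5, -2, 8, 4, 200], dtype=float)
common, special = split_special_causes(effort_variance)
print(three_sigma_limits(effort_variance))  # (LCL, CL, UCL)
print(special)                              # -> [200.]
```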

The process performance baselines are computed as follows:
- Mean and 95% confidence intervals for the mean and standard deviation are computed using the Minitab Graphical Summary for all the metrics defined in the measurement system, and are reported in the baseline report (PPB).
- For the selected metrics that define the quality and process performance objectives, the statistical significance of the difference between the current data set and the data set for the previous PPB is established by hypothesis testing (two-sample t-test); a small sketch follows this list.
- If no statistically significant difference is established, the targets for the metrics are retained for the current period.
- If a statistically significant difference is established, a one-sample t-test is done against the hypothesized mean of the previous baseline and the targets are revised accordingly.
- However, the targets to be fixed in a particular period are at the discretion of top management, based on the business objectives.
The baselines reported in the PPB report are reviewed and approved by the SEPG.
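The baseline comparison and confidence-interval computation described above can be sketched as follows, using SciPy in place of Minitab (an assumption); the data and significance threshold are illustrative only.

```python
# Compare the current period's data with the previous PPB period and report
# a 95% confidence interval for the mean, as described in the PPB procedure.
import numpy as np
from scipy import stats

def compare_baseline(previous: np.ndarray, current: np.ndarray, alpha: float = 0.05) -> str:
    """Decide whether metric targets should be revised for the new PPB period."""
    t_stat, p_value = stats.ttest_ind(previous, current, equal_var=False)
    if p_value >= alpha:
        return "no significant shift: retain previous targets"
    return "significant shift: revise targets around the new mean"

def mean_confidence_interval(values: np.ndarray, confidence: float = 0.95):
    """Confidence interval for the mean, returned as (lower, mean, upper)."""
    mean = values.mean()
    sem = stats.sem(values)
    half_width = sem * stats.t.ppf((1 + confidence) / 2, len(values) - 1)
    return mean - half_width, mean, mean + half_width

# Hypothetical effort-variance observations for two PPB periods.
prev_effort_variance = np.array([91.0, 60.0, 120.0, 85.0, 95.0, 110.0, 75.0])
curr_effort_variance = np.array([6.0, 9.0, 3.0, 8.0, 5.0, 7.0, 4.0])
print(compare_baseline(prev_effort_variance, curr_effort_variance))
print(mean_confidence_interval(curr_effort_variance))
```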

Table 4: Sample Process Performance Baselines (Source: C-DAC Noida OPP [8])


EG & ES bucket (results up to September 2010, then up to December 2010, each as LCL / CL / UCL):
1. S.V at SIT (%): -18% / 31% / 80%; -19% / 12% / 43%
2. E.V at SIT (%): -108% / 91% / 291%; -9% / 6% / 20%
3. DD at SIT (Defects per Process or F.P): 0 / 5.04 / 13.83; 0.00 / 1.28 / 5.72
4. Rework Effort at Coding Phase (%): 0% / 22% / 57%; 0% / 4% / 18%
5. SRS Review Effectiveness (%): 0% / 39% / 100%; 0% / 10% / 40%
6. Design Review Effectiveness (%): 0% / 34% / 113%; 0% / 9% / 28%
7. Code Review Effectiveness (%): 5% / 52% / 100%; 0% / 43% / 108%
8. SRS Review Efficiency (Defects per Hour): 0.00 / 1.70 / 4.47; 0.00 / 2.08 / 7.91
9. Design Review Efficiency (Defects per Hour): 0.00 / 0.88 / 2.31; 0.00 / 0.09 / 0.28
10. Design Rework Effort (%): 0% / 12% / 47%; 0% / 2% / 10%

HIS bucket (results up to September 2010, then up to December 2010, each as LCL / CL / UCL):
11. S.V at SIT (%): -25% / 31% / 87%; -10% / 10% / 30%
12. E.V at SIT (%): -48% / 0% / 48%; -8% / 9% / 26%
13. DD at SIT (Defects per Process): 0.00 / 2.95 / 11.50; 0.00 / 1.75 / 5.16
14. Rework Effort at Coding Phase (%): 0% / 4% / 13%; 4% / 8% / 13%
15. SRS Review Effectiveness (%): 0% / 36% / 103%; 2% / 21% / 40%
16. Design Review Effectiveness (%): 0% / 26% / 76%; 28% / 40% / 52%
17. Code Review Effectiveness (%): 0% / 40% / 102%; 27% / 39% / 51%
18. SRS Review Efficiency (Defects per Hour): 0.00 / 0.91 / 3.07; 0.00 / 0.44 / 1.27
19. Design Review Efficiency (Defects per Hour): 0.00 / 0.38 / 1.20; 0.00 / 0.09 / 0.25
20. Design Rework Effort (%): 0% / 4% / 18%; 0% / 4% / 10%

Process Performance Models (PPM) are used to represent past and current process performance and to predict future results of the process. PPMs reflect the process behaviour. Process performance models estimate and predict the value of process performance measures from other measurements.

Steps for establishing Process Performance Models:
- Identify the measures that need to be monitored for tracking towards the goals fixed for a project (the dependent, or response, variable y).
- Identify the factors that influence this variable, i.e. for each measure, identify the influencing measures from the same or other sub-processes (the independent, or predictor, variables x).
- After identifying the influencing variables, build a regression equation individually for each influencing variable and check the following statistics:
  o p-value: the p-value is used to decide which x factors add value to the model.
  o R²: the R-squared value is reviewed to see what percentage of y's behaviour is explained by the model. This is a statistical measure (a number from 0 to 1, or 0% to 100%) of how closely the estimated values from the trend line correspond to the actual data; an R² value of 1.0 (100%) indicates a perfect fit. It is also known as the coefficient of determination. The underlying correlation coefficient is R(X, Y) = Covariance(X, Y) / (Std Dev(X) * Std Dev(Y)).
  o Residual plot: a residual plot allows you to determine whether your regression model is a good fit to your data.
- The independent variables that have desirable statistical values (R² and p-values) as above are combined, and multiple regression equations are generated on different combinations (see the sketch after this list).
- If the R² values do not show a strong correlation, check the data for normality and do an F-test to check the difference in variances of the assumed buckets.
  o Use the F-test to determine which t-test (for equal or unequal variances) is applicable to the data.
- If the difference is significant, stratify the data into meaningful buckets and establish the regression equations.
- The regression equation that gives the most value-adding results (good p and R-squared values) is considered a Process Performance Model and is published in the QMS for use by projects. The coefficients of the equation explain the influence of each independent variable (x) on the dependent variable (y).
- A few pre-requisites for developing PPMs are as follows:
  o Homogeneity of the sample is ensured using statistical tests.
  o Generally the controllable x's are preferred, so that the performance of a controllable x can be improved, which results in improvement of y.
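A sketch of the regression-building steps above is shown here, using statsmodels ordinary least squares in place of Minitab (an assumption); the bucket data and column names are hypothetical, and a real PPM would be fitted per bucket on the collected project measures.

```python
# Fit a multiple regression of a dependent measure (y) on candidate
# influencing measures (x), then inspect p-values, R-squared and residuals.
import pandas as pd
import statsmodels.api as sm

# Hypothetical project-level data for one bucket.
data = pd.DataFrame({
    "srs_review_effectiveness":    [0.10, 0.25, 0.40, 0.15, 0.30, 0.20, 0.35, 0.05, 0.45, 0.28, 0.12, 0.38],
    "design_review_effectiveness": [0.08, 0.20, 0.35, 0.12, 0.28, 0.15, 0.30, 0.05, 0.40, 0.22, 0.10, 0.33],
    "dd_at_sit":                   [3.1, 2.2, 1.1, 2.8, 1.8, 2.5, 1.4, 3.4, 0.9, 2.0, 3.0, 1.2],
})
# Dependent (y) variable: schedule variance at SIT for the same projects.
y = pd.Series([0.35, 0.10, -0.15, 0.28, 0.02, 0.18, -0.05, 0.42, -0.20, 0.06, 0.33, -0.10],
              name="sv_at_sit")

X = sm.add_constant(data)          # add the intercept term
model = sm.OLS(y, X).fit()
print(model.summary())             # coefficients, p-values, R-squared
print(model.rsquared)              # fraction of y's behaviour explained
print(model.pvalues)               # which x factors add value to the model
residuals = model.resid            # plot these to judge goodness of fit
```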

Some PPMs for each bucket are given below:

Schedule variance (S.V.) for the E-Governance bucket, derived from 12 data points:
S.V. at SIT = 0.0180 - 3.11 * SRS Review Effectiveness - 2.31 * Design Review Effectiveness + 0.0725 * DD at SIT

Delivered defect density (D.D.) for the E-Governance bucket, derived from 17 data points:
D.D. = 1.34 - 13.9 * SRS Review Effectiveness - 2.16 * Design Review Effectiveness - 1.40 * Code Review Effectiveness

Rework at Coding Phase for the Health Informatics bucket, derived from 14 data points:
Rework at Coding Phase = 0.0477 - 0.0182 * SRS Review Efficiency - 0.0578 * Design Review Efficiency + 0.00564 * Defect Density
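Written as a function, the published E-Governance schedule-variance model can be used by a project to predict S.V. at SIT from its own sub-process measures. The paper does not state whether the effectiveness and defect-density terms enter as fractions or percentages, so the example inputs below are assumptions.

```python
# The published E-Governance PPM for schedule variance, as a small helper.
def predict_sv_at_sit(srs_review_effectiveness: float,
                      design_review_effectiveness: float,
                      dd_at_sit: float) -> float:
    """S.V. at SIT = 0.0180 - 3.11*SRS RE - 2.31*Design RE + 0.0725*DD at SIT."""
    return (0.0180
            - 3.11 * srs_review_effectiveness
            - 2.31 * design_review_effectiveness
            + 0.0725 * dd_at_sit)

# Example call with assumed fractional effectiveness values.
print(predict_sv_at_sit(0.10, 0.09, 1.28))
```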


7.  Conclusion

A well-designed statistical approach to process improvement, with documented objectives, should help an organization obtain the information it needs to continue improving its software products, processes and services while maintaining a focus on what is important. This paper provides a practical, systematic, start-to-finish method of selecting, designing and implementing software processes and metrics. It was found to be a valuable aid for quality improvement at C-DAC Noida.

Acknowledgment

This work was carried out under the CMMI implementation project funded by the Core grant. The authors wish to thank Dr. George Varkey, Executive Director, C-DAC Noida, and Ms. R.T. Sundari, Consultant, C-DAC Noida, for supporting this work and providing valuable feedback.

References

[1] William A. Florac and Anita D. Carleton, "Measuring the Software Process: Statistical Process Control for Software Process Improvement", SEI Series in Software Engineering.
[2] http://en.wikipedia.org/wiki/Normality_test
[3] Linda Westfall, "12 Steps to Useful Software Metrics", American Society for Quality (ASQ) Software Division.
[4] Pankaj Jalote, "Use of Metrics in High Maturity Organizations", Department of Computer Science and Engineering, Indian Institute of Technology Kanpur, India - 208016.
[5] Standard CMMI Appraisal Method for Process Improvement (SCAMPI).
[6] Ramesh Pusala, "Operational Excellence through Efficient Software Testing Metrics", Infosys Technology, India.
[7] Stephen H. Kan, "Metrics and Models in Software Quality Engineering", Second Edition, Addison Wesley, 2002.
[8] http://qms.cdacnoida.in/QMS/

About Authors

Umesh Kumar Mishra completed his M.Tech (Computer Technology & Application) from UIT RGPV. He is ISEB certified in Software Testing and is a Microsoft Certified Technology Specialist (MCTS) in Microsoft technologies. He has been associated with C-DAC Noida for the past 3 years. He has published 13 international and national research papers. His research interests include Object Oriented Software Engineering, Software Testing and Software Quality Assurance.

K. Harihara sudhan, an Associate Project Engineer at C-DAC, received his Bachelor's in Computer Science & Engineering from Anna University and is presently pursuing his Master's in Quality Management from BITS Pilani. He has been working on Software Quality Control and Software Quality Assurance for the past 4.5 years. He is highly proficient in High Maturity Practices and was a core member of C-DAC's CMMi implementation team at all levels. His areas of interest include Software Quality Control, Software Quality Assurance, Software Metrics and Statistical Process Control.

Ms. Shalu Gupta (Scientist 'C') joined C-DAC in 2008. She has eight years of experience in software development. She has worked in the fields of NMS, SNMP, Optical Comm., DSLAM, OCR and Quality Assurance. She has worked in various companies such as C-DoT, Wipro Technology and Flextronics Software Systems. Currently she is associated with the Quality


Assurance Group. She has published 5 international and national research papers. Her areas of interest include Software Quality Assurance and Software Metrics.
