Validity & reliability seminar

VALIDITY & RELIABILITY
DR. MOUSUMI SARKAR; PGT COMMUNITY MEDICINE

VALIDITY

Validity is the extent to which a test measures what it claims to measure.

RELIABILITY

Reliability is the extent to which an experiment, test, or any measuring procedure shows the same result on repeated trials.

TYPES OF VALIDITY

Content validity
    Face validity
    Curricular validity

Criterion-related validity
    Predictive validity
    Concurrent validity

Construct validity
    Convergent validity
    Discriminant validity

Content Validity: It is the extent to which the measurement method covers the entire range of relevant behaviors, thoughts, and feelings that define the construct being measured.

Face Validity:

It is the extent to which the measurement method appears on its face to measure the construct of interest.

Curricular Validity is the extent to which the content of the test matches the objectives of a specific curriculum.

Criterion Validity: It is the extent to which people's scores are correlated with other variables or criteria that reflect the same construct.

TYPES OF CRITERION VALIDITY:

Predictive Validity:

When the criterion is something that will happen or be assessed in the future, this is called predictive validity.

Concurrent Validity: When the criterion is something that is happening or being assessed at the same time as the construct of interest, it is called concurrent validity.

Construct Validity

Construct validity refers to the degree to which a test or other measure assesses the underlying theoretical construct it is supposed to measure.

Convergent validity

consists of providing evidence that two tests that are believed to measure closely related skills or types of knowledge correlate strongly.

Discriminant validity

by the same logic, consists of providing evidence that two tests that do not measure closely related skills or types of knowledge do not correlate strongly.

Ecological Validity:

It refers to the extent to which the findings can be generalized beyond the present situation.

External Validity: It is the extent to which the results of a research study can be generalized to different situations, different groups of people, different settings, different conditions, etc.

Internal Validity:

It is basically the extent to which a study is free from flaws and that any differences in a measurement are due to an independent variable and nothing else.

Steps for Assessing Validity of an Experimental Study

Step 1. Validity of statistical conclusion
    Assessment process: Assess statistical significance (i.e., the p value is <= 0.05 and the statistical results are valid).
    Decision: Difference is real and is not likely due to chance variation; proceed to next step. / Difference is likely due to chance variation; stop here.

Step 2. Internal validity
    Assessment process: Assess internal validity on the basis of the research design and operational procedures.
    Decision: Difference is most likely due to the treatment; proceed to next step. / Difference is probably due to the effects of confounding factors or bias; stop here.

Step 3. External validity
    Assessment process: Examine inclusion and exclusion criteria and characteristics of study participants.
    Decision: Study participants are similar to patients the report reader sees; the treatment should be useful. / Study participants are very different from patients the report reader sees; the treatment may or may not be useful.

Measurement of Validity

Important points to evaluate the validity of a measurement method.

1. First, this process requires empirical evidence. A measurement method cannot be declared valid or invalid before it has ever been used and the resulting scores have been thoroughly analyzed.

2. Second, it is an ongoing process. The conclusion that a measurement method is valid generally depends on the results of many studies done over a period of years.

3. Third, validity is not an all-or-none property of a measurement method. It is possible for a measurement method to be judged "somewhat valid" or for one measure to be considered "more valid" than another.

Measures of validity

Sensitivity

Specificity

AUC

ROC curve
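Sensitivity and specificity follow directly from the 2x2 table that compares a test against a gold standard. A minimal sketch, using hypothetical counts for illustration:

```python
# Sensitivity and specificity from a 2x2 table comparing a screening
# test against a gold standard. Counts below are hypothetical.
tp, fn = 90, 10   # diseased subjects: test positive / test negative
fp, tn = 30, 170  # healthy subjects:  test positive / test negative

sensitivity = tp / (tp + fn)  # P(test positive | disease present)
specificity = tn / (tn + fp)  # P(test negative | disease absent)

print(f"Sensitivity: {sensitivity:.2f}")  # 0.90
print(f"Specificity: {specificity:.2f}")  # 0.85
```

An ROC curve plots sensitivity against (1 - specificity) across all cut-off points of the test, and the AUC summarizes that curve in a single number.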

Factors Affecting Validity

History
Maturation
Testing
Instrumentation
Statistical Regression
Experimental Mortality
Compensatory Rivalry by Control Group
Compensatory Equalization of Treatments
Resentful Demoralization of Control Group

RELIABILITY

Without reliability, researchers are unable to satisfactorily draw conclusions, formulate theories, or make claims about the generalizability of their research.

Types of reliability are:

Equivalency

Stability

Internal consistency

Inter-rater

Intra-rater

Equivalency reliability:

The extent to which two items measure identical concepts at an identical level of difficulty.

Equivalency reliability is determined by relating two sets of test scores to one another to highlight the degree of relationship or association.

Stability reliability (test-retest reliability):

It is the agreement of measuring instruments over time.

To determine stability, a measure or test is repeated on the same subjects at a future date. Results are compared and correlated with the initial test to give a measure of stability.

INTERNAL CONSISTENCY

Internal consistency is the extent to which tests or procedures assess the same characteristic, skill or quality.

It is a measure of the precision between the measuring instruments used in a study.

Inter-rater reliability: The extent to which two or more individuals (coders or raters) agree.

Intra-rater reliability: A type of reliability assessment in which the same assessment is completed by the same rater on two or more occasions.

Reliability Coefficient

Pearson product moment correlation: The extent to which the relation between two variables can be described by a straight line.

Bland-Altman analysis:

A plot of the differences between two observations against the means of the two observations.
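The key quantities behind a Bland-Altman plot are the mean difference (bias) and the 95% limits of agreement (bias +/- 1.96 SD of the differences). A minimal sketch, using hypothetical paired readings:

```python
import statistics

# Hypothetical paired readings from two measurement methods.
method_a = [102, 98, 110, 95, 105, 100]
method_b = [100, 99, 107, 96, 104, 103]

diffs = [a - b for a, b in zip(method_a, method_b)]
means = [(a + b) / 2 for a, b in zip(method_a, method_b)]

bias = statistics.mean(diffs)               # mean difference between methods
sd = statistics.stdev(diffs)                # SD of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

print(f"Bias = {bias:.2f}; limits of agreement = {loa[0]:.2f} to {loa[1]:.2f}")
```

On the plot itself, `diffs` is drawn against `means`, with horizontal lines at the bias and at each limit of agreement.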

Cohen's Kappa:

How much better the raters' level of agreement is than the agreement that would result from chance alone.

Threats to Reliability

Subject Reliability: Factors due to the research subject.

Observer Reliability: Factors due to the observer/rater/interviewer.

Situational Reliability: Conditions under which measurements are made (e.g., a busy day at the clinic).

Instrument Reliability: The research instrument or measurement approach itself (e.g., poorly worded questions, a quirk in a mechanical device).

Data Processing Reliability: The manner in which data are handled (e.g., miscoding).

Relationship of validity & reliability

Validity and reliability are closely related.

A test cannot be considered valid unless the measurements resulting from it are reliable.

Likewise, results from a test can be reliable and not necessarily valid.

Proponents of the exclusive trend claim that the terms validity and reliability do not make sense in qualitative research and should therefore be replaced:

Internal validity-Credibility

External validity-Transferability

Reliability- Dependability

Validity: Does it measure what it is supposed to measure?

Reliability: How representative is the measurement?