# Chapter 17 Comparing Two Proportions

Date posted: 18-Jan-2016


In Chapter 17:

- 17.1 Data
- 17.2 Risk Difference
- 17.3 Hypothesis Test
- 17.4 Risk Ratio
- 17.5 Systematic Sources of Error
- 17.6 Power and Sample Size

Data conditions

- Binary response variable (success/failure)
- Binary explanatory variable: Group 1 = exposed, Group 2 = non-exposed
- Notation: ai = number of successes in group i, ni = size of group i

Sample Proportions

- Sample proportion (average risk), group 1: p̂1 = a1 / n1
- Sample proportion (average risk), group 2: p̂2 = a2 / n2

Example: WHI Estrogen Trial

- Random assignment: Group 1 (estrogen treatment), n1 = 8506; Group 2 (placebo), n2 = 8102
- Compare risks of index outcome (death, MI, breast cancer, etc.)

2-by-2 Table

|         | Successes | Failures | Total |
|---------|-----------|----------|-------|
| Group 1 | a1        | b1       | n1    |
| Group 2 | a2        | b2       | n2    |
| Total   | m1        | m2       | N     |

WHI Data

|       | D+   | D−    | Total |
|-------|------|-------|-------|
| E+    | 751  | 7755  | 8506  |
| E−    | 623  | 7479  | 8102  |
| Total | 1374 | 15234 | 16608 |

Proportion Difference (Risk Difference)

- The risk difference p̂1 − p̂2 quantifies excess risk in absolute terms
- In large samples, the sampling distribution of the risk difference is approximately Normal

(1 − α)100% CI for p1 − p2, plus-four method:

p̃1 − p̃2 ± z(1−α/2) × sqrt( p̃1(1 − p̃1)/(n1 + 2) + p̃2(1 − p̃2)/(n2 + 2) ), where p̃i = (ai + 1)/(ni + 2)

Estrogen Trial, 95% CI for p1 − p2

Data: a1 = 751, n1 = 8506, a2 = 623, n2 = 8102

The 95% CI for p1 − p2 indicates an excess risk of between 0.3% and 2.0% (in absolute terms).
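As a check on the hand calculation, here is a minimal sketch of the plus-four interval in Python (standard library only; variable names are illustrative, not from the text):

```python
# Plus-four 95% CI for p1 - p2, WHI data (numbers from the slides).
from statistics import NormalDist
from math import sqrt

a1, n1 = 751, 8506   # estrogen group: events, sample size
a2, n2 = 623, 8102   # placebo group

# Plus-four adjustment: add one success and one failure to each group.
p1 = (a1 + 1) / (n1 + 2)
p2 = (a2 + 1) / (n2 + 2)

z = NormalDist().inv_cdf(0.975)          # 1.96 for 95% confidence
se = sqrt(p1*(1 - p1)/(n1 + 2) + p2*(1 - p2)/(n2 + 2))

diff = p1 - p2
lo, hi = diff - z*se, diff + z*se
print(f"risk difference = {diff:.4f}, 95% CI ({lo:.4f}, {hi:.4f})")
# CI works out to roughly 0.003 to 0.020, i.e., 0.3% to 2.0% excess risk
```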

95% CI for p1 − p2, plus-four method: similar to Wilson's score method. Output from WinPepi > Compare2 program:

17.3 Hypothesis Test

A. H0: p1 = p2 (equivalently, H0: RR = 1)
B. Test statistic (three options):
   - z (large samples)
   - Chi-square (large samples, next chapter)
   - Fisher's exact (any sample size)
C. P-value
D. Interpret evidence against H0

z Test

A. H0: p1 = p2 vs. Ha: p1 ≠ p2 (two-sided)
B. Test statistic: zstat = (p̂1 − p̂2) / SE0, where SE0 = sqrt( p̄(1 − p̄)(1/n1 + 1/n2) ) and p̄ = (a1 + a2)/(n1 + n2) is the pooled proportion
C. One-sided P = Pr(Z ≥ |zstat|); two-sided P = 2 × one-sided P

z Test Example

A. H0: p1 = p2 vs. Ha: p1 ≠ p2
B. Test statistic: zstat = 2.66
C. One-sided P = Pr(Z ≥ 2.66) = .0039; two-sided P = 2 × .0039 = .0078
D. The evidence against H0 is highly significant: the proportions (average risks) differ significantly.
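The same z test can be reproduced with a short Python sketch (standard library only; the pooled-SE formula is the one described in the z Test slide):

```python
# z test for H0: p1 = p2 on the WHI data.
from statistics import NormalDist
from math import sqrt

a1, n1 = 751, 8506
a2, n2 = 623, 8102

p1, p2 = a1/n1, a2/n2
p_bar = (a1 + a2) / (n1 + n2)              # pooled proportion under H0

se0 = sqrt(p_bar*(1 - p_bar)*(1/n1 + 1/n2))
z_stat = (p1 - p2) / se0

p_one = 1 - NormalDist().cdf(abs(z_stat))  # one-sided P-value
p_two = 2 * p_one
print(f"z = {z_stat:.2f}, one-sided P = {p_one:.4f}")
# z rounds to 2.66 and the one-sided P to .0039, as in the slides
```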

z Test: Notes

- Numerator of the z statistic = observed difference; denominator = standard error assuming p1 = p2
- A continuity correction can optionally be applied (p. 382)
- Equivalent to the chi-square test of association (HS 267)
- Avoid z tests in small samples; use an exact binomial procedure (HS 267)

Fisher's Exact Test

- All-purpose test for H0: p1 = p2
- Based on exact binomial probabilities
- Calculation-intensive, but easy with modern software
- Comes in original and mid-probability-corrected forms

Example: Fisher's Test

A. Data: the incidence of colonic necrosis in an exposed group is 2 of 117; the incidence in a non-exposed group is 0 of 862. Is this difference statistically significant? Under the null hypothesis there is no difference in risk between the two populations. Thus: H0: p1 = p2; Ha: p1 > p2 (one-sided) or Ha: p1 ≠ p2 (two-sided)

B. Test statistic: none per se

C. P-value: use WinPepi > Compare2.exe > A.

D. Interpret: P-value = .014, strong (significant) evidence against H0

|    | D+ | D−  |
|----|----|-----|
| E+ | 2  | 115 |
| E− | 0  | 862 |
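The one-sided Fisher P-value can be reproduced by summing hypergeometric probabilities directly. A standard-library Python sketch (the helper function name is my own, not from the text):

```python
# Fisher's exact test for the colonic-necrosis example:
# 2/117 exposed vs 0/862 non-exposed.
from math import comb

a1, n1 = 2, 117    # exposed: events, group size
a2, n2 = 0, 862    # non-exposed
m1 = a1 + a2       # total events (column margin)
N = n1 + n2

def hypergeom_p(k):
    # probability that exactly k of the m1 events fall in the exposed group
    return comb(n1, k) * comb(n2, m1 - k) / comb(N, m1)

# one-sided: observed count in the exposed group, or more extreme
p_one = sum(hypergeom_p(k) for k in range(a1, min(n1, m1) + 1))
print(f"one-sided P = {p_one:.4f}")
# rounds to .014, matching the WinPepi output quoted above
```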

17.4 Proportion Ratio (Relative Risk)

- "Relative risk" refers to the RATIO of two proportions: RR = p̂1 / p̂2
- Also called the risk ratio

Example: RR (WHI Data)

|            | D+  | D−   | Total |
|------------|-----|------|-------|
| Estrogen + | 751 | 7755 | 8506  |
| Estrogen − | 623 | 7479 | 8102  |

RR = (751/8506) / (623/8102) ≈ 1.15

Interpretation

- The RR is a risk multiplier: an RR of 1.15 suggests the risk in the exposed group is 1.15 times that of the non-exposed group, i.e., 0.15 (15%) above the relative baseline
- When p1 = p2, RR = 1; a baseline RR of 1 indicates no association
- An RR of 1.15 represents a weak positive association

Confidence Interval for the RR

To derive information about the precision of the estimate, calculate a (1 − α)100% CI for the RR with this formula (ln = natural log, base e):

exp[ ln(RR) ± z(1−α/2) × sqrt( 1/a1 − 1/n1 + 1/a2 − 1/n2 ) ]

90% CI for RR, WHI

|    | D+  | D−   | Total |
|----|-----|------|-------|
| E+ | 751 | 7755 | 8506  |
| E− | 623 | 7479 | 8102  |

WinPepi > Compare2.exe > Program B. See the prior slide for hand calculations.
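The hand calculation for the RR and its 90% CI can be sketched in Python (standard library only), using the ln-based CI formula for a risk ratio:

```python
# Risk ratio and 90% CI for the WHI data.
from statistics import NormalDist
from math import sqrt, log, exp

a1, n1 = 751, 8506
a2, n2 = 623, 8102

rr = (a1/n1) / (a2/n2)
se_ln_rr = sqrt(1/a1 - 1/n1 + 1/a2 - 1/n2)   # SE of ln(RR)

z = NormalDist().inv_cdf(0.95)               # 1.645 for 90% confidence
lo = exp(log(rr) - z*se_ln_rr)
hi = exp(log(rr) + z*se_ln_rr)
print(f"RR = {rr:.2f}, 90% CI ({lo:.2f}, {hi:.2f})")
```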


Confidence Interval for the RR

- Interpretation is similar to other confidence intervals
- The interval intends to capture the parameter (in this case, the RR parameter)
- The confidence level refers to confidence in the procedure
- The CI length quantifies the precision of the estimate

17.5 Systematic Error

- CIs and P-values address random error only
- In observational studies, systematic errors are more important than random error
- Consider three types of systematic error: confounding, information bias, selection bias

Confounding

Confounding = mixing together of the effects of the explanatory variable with those of extraneous factors. Example: the WHI trial found a 15% increase in risk in the estrogen-exposed group, while earlier observational studies had found risk about 40% lower in estrogen-exposed groups. A plausible explanation: confounding by extraneous lifestyle factors in the observational studies.

Information Bias

- Information bias = mismeasurement (misclassification) leading to overestimation or underestimation of risk
- Nondifferential misclassification (occurs to the same extent in both groups) tends to bias results toward the null, or to have no effect
- Differential misclassification (one group experiences a greater degree of misclassification than the other) can bias results in either direction

Nondifferential & Differential Misclassification - Examples

Selection Bias

Selection bias = systematic error related to the manner in which study participants are selected. Example: if we shoot an arrow into the broad side of a barn and draw a bull's-eye where it landed, have we identified anything that is nonrandom?

Sample Size & Power for Comparing Proportions

Three approaches:

- n needed to estimate a given effect with margin of error m (not covered in Ch 17)
- n needed to test H0 at a given α and power
- power of the test of H0 under given conditions

Sample Size Requirements for Comparing Proportions

Depends on:

- r, the sample size ratio = n1 / n2
- 1 − β, the power (β = acceptable type II error rate)
- α, the significance level (type I error rate)
- p1, the expected proportion in group 1
- p2, the expected proportion in group 2, or the expected effect size (e.g., RR)

Calculation formulas are on pp. 396–402 (complex). In practice, use WinPEPI > Compare2.exe > Sample size.
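For a sense of what such calculators do, here is one common large-sample formula for equal group sizes, sketched in Python (standard library only; no continuity correction, so results approximate what WinPEPI reports rather than reproduce it exactly):

```python
# Approximate sample size per group for comparing two proportions
# with a two-sided alpha and given power.
from statistics import NormalDist
from math import sqrt, ceil

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    z_a = NormalDist().inv_cdf(1 - alpha/2)   # critical value, two-sided alpha
    z_b = NormalDist().inv_cdf(power)         # critical value for power
    p_bar = (p1 + p2) / 2                     # average of the two proportions
    num = (z_a*sqrt(2*p_bar*(1 - p_bar))
           + z_b*sqrt(p1*(1 - p1) + p2*(1 - p2)))**2
    return ceil(num / (p1 - p2)**2)

# e.g., to detect 5% vs 10% risk with alpha = .05 (two-sided), 80% power:
print(n_per_group(0.05, 0.10))   # about 435 per group
```

As expected, demanding higher power or a smaller α increases the required n.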

WinPepi > Compare2 > S1

Chapter 17, Basic Biostat
