Analysis and Interpretation


Description:

Cochrane Review author training workshop, January 22-23, 2009 at the University of Calgary Health Sciences Centre

Transcript of Analysis and Interpretation

Page 1: Analysis and Interpretation

Analysis and Interpretation: Overview

• Analyses
  – Narrative: summary and discussion
  – Quantitative: involving statistical analysis (including meta-analysis)
• Meta-analysis should only be used when appropriate
• Inappropriate to define a systematic review as high quality based on whether it contains a meta-analysis

Page 2: Analysis and Interpretation


Framework for synthesis

Whether narrative or quantitative, a general framework for synthesis:

1. What is the direction of effect?

2. What is the size of the effect?

3. Is the effect consistent across studies?

4. What is the strength of evidence for the effect?

Page 3: Analysis and Interpretation

Why Perform a Meta-analysis?

• Increases statistical power
• Improves precision
• Answers questions not posed by individual studies
• Settles controversies from conflicting studies, or generates new hypotheses
• Meta-analyses derive meaningful conclusions from data and help prevent errors in interpretation

Page 4: Analysis and Interpretation

More on Meta-analysis

• What it is not: simply adding up all the patients across trials; trials need to be weighted
• May be possible to conduct for some comparisons/outcomes in a review and not for others
• Need to determine whether the studies are similar enough to be meta-analyzed
• Need to make a decision as to whether it is appropriate!

Page 5: Analysis and Interpretation

When Not Appropriate to do a Meta-analysis

• If studies are clinically diverse
  – Results may be meaningless
  – Genuine differences may be obscured
• If there is a mix of comparisons → determine which need to be assessed separately
• If outcomes are too diverse
• If studies at risk of bias are included, the results may be misleading
• Presence of serious publication or reporting biases

Page 6: Analysis and Interpretation

Dichotomous Measures

Whether for an individual study or a meta-analysis:

• Relative measures: risk ratio (RR) or odds ratio (OR)
• Absolute measures: risk difference (RD), number needed to treat (NNT)

Page 7: Analysis and Interpretation

Risk ratio (RR), aka relative risk

                Event   No event   Total
Intervention      a        b       a+b = nI
Control           c        d       c+d = nC

RR = [a / (a+b)] / [c / (c+d)]

Risk / probability / chance of the occurrence of an event in the treatment group relative to the control group

Page 8: Analysis and Interpretation

Sample RR Calculation

           Death   No death   Total
Drug         14      119       133
Placebo     128       20       148

RR = (14/133) / (128/148) = 0.11 / 0.86 = 0.13

Page 9: Analysis and Interpretation

Odds ratio (OR)

                Event   No event   Total
Intervention      a        b       a+b = nI
Control           c        d       c+d = nC

OR = (a / b) / (c / d)

Odds of an event occurring versus not occurring, for the treatment group relative to the control group

Page 10: Analysis and Interpretation

Sample OR Calculation

           Death   No death   Total
Drug         14      119       133
Placebo     128       20       148

OR = (14/119) / (128/20) = 0.12 / 6.4 = 0.019
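As a quick check on this arithmetic, here is a minimal Python sketch (the function names are illustrative, not from RevMan) that computes RR and OR from the 2×2 table used on the last two slides:

```python
def risk_ratio(a, b, c, d):
    """RR = [a/(a+b)] / [c/(c+d)] for a 2x2 table (event/no event, intervention/control)."""
    return (a / (a + b)) / (c / (c + d))

def odds_ratio(a, b, c, d):
    """OR = (a/b) / (c/d) for the same 2x2 table."""
    return (a / b) / (c / d)

# Drug vs placebo example from the slides: 14/133 deaths vs 128/148 deaths
print(risk_ratio(14, 119, 128, 20))   # ~0.12  (the slides round to 0.11 / 0.86 = 0.13)
print(odds_ratio(14, 119, 128, 20))   # ~0.018 (the slides round to 0.12 / 6.4 = 0.019)
```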

Page 11: Analysis and Interpretation

Interpreting (for the intervention)

                      Good outcome             Bad outcome
                      (e.g. remission)         (e.g. infection)
RR < 1 (0.11/0.86)    Reduced risk             Reduced risk
                      (not beneficial)         (beneficial)
RR > 1 (0.86/0.11)    Increased risk           Increased risk
                      (beneficial)             (harmful)
OR = 1, RR = 1        No difference            No difference
OR < 1 (0.12/6.4)     Reduced odds             Reduced odds
                      (not beneficial)         (beneficial)
OR > 1 (6.4/0.12)     Increased odds           Increased odds
                      (beneficial)             (harmful)

Page 12: Analysis and Interpretation

RR vs. OR

• Different measures – people make the mistake of interpreting them as the same
• Similar values when events are rare, but differences appear when events are common (see the worked example below):
  – When Rx increases the chance of events, OR > RR
  – When Rx decreases the chance of events, OR < RR
  – In both cases, interpreting the OR as if it were an RR overestimates the intervention effect!
• The RR for an event and the RR for a non-event are not the same!
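For example (numbers chosen only for illustration): if the event occurs in 80% of the intervention group and 40% of the control group, RR = 0.80 / 0.40 = 2.0, but OR = (0.80/0.20) / (0.40/0.60) = 4.0 / 0.67 = 6.0, so reading the OR of 6 as a "six-fold risk" would greatly overstate the intervention effect.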

Page 13: Analysis and Interpretation

Closer Look at Odds

RR = 0.11 / 0.86 = 0.13   (0.11 is a rate: 11% of the drug group died)

OR = 0.12 / 6.4 = 0.019   (0.12 is the odds of death in the drug group, roughly 1:9; 6.4 is the odds in the placebo group, roughly 7:1)

Page 14: Analysis and Interpretation

Absolute Effect Measures

• Relative measures don't tell you the actual number of participants who benefited
  – RR 2.0 is the same for 80% vs 40% as for 10% vs 5%... but these are very different event rates!

Page 15: Analysis and Interpretation

Risk Difference (RD)

Actual difference in risk of events

           Death   No death   Total
Drug         14      119       133
Placebo     128       20       148

RD = 14/133 – 128/148 = 0.11 – 0.86 = –0.75

Page 16: Analysis and Interpretation


Risk Difference (RD) (continued)

• RD = 0, no difference between groups

• RD<0 reduces risk (☺ for bad outcome, not for good outcome)

• RD>0 increases risk (☺ for good outcome, harmful for bad)

Page 17: Analysis and Interpretation

NNT

• Expected number of people who need to receive the experimental rather than the comparator intervention for one additional person to incur or avoid an event in a given time frame
• For a single study, can be calculated from the RD (see the sketch below)
• NNTs cannot be combined in a meta-analysis; they need to be calculated from another meta-analysis summary statistic
• From a meta-analysis, the NNT should be calculated from either the OR or the RR (Chapter 12)
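As a rough sketch of these points, using the drug/placebo example above and, for the pooled-RR case, an assumed comparator risk (Handbook Chapter 12 gives the full set of recommended formulas; the helper names here are made up for illustration):

```python
import math

def nnt_from_rd(rd):
    """Single study: NNT is the reciprocal of the absolute risk difference."""
    return 1 / abs(rd)

def nnt_from_rr(rr, assumed_comparator_risk):
    """From a pooled RR and an assumed comparator risk (ACR):
    risk difference = ACR - ACR*RR, so NNT = 1 / |ACR * (1 - RR)|."""
    return 1 / abs(assumed_comparator_risk * (1 - rr))

print(math.ceil(nnt_from_rd(-0.75)))       # 2 (1/0.75 ≈ 1.3, conventionally rounded up)
print(round(nnt_from_rr(0.13, 0.86), 1))   # ~1.3 using the slide's comparator risk of 0.86
```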

Page 18: Analysis and Interpretation

Uncertainty

• Confidence interval, usually 95%
  – Range of values above and below the calculated treatment effect within which we can be reasonably certain (e.g., 95% certain) that the real effect lies
  – For RR and OR, results are statistically significant if the CI does not include 1
  – For RD, results are statistically significant if the CI does not include 0
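To illustrate where such an interval comes from for an RR, here is a sketch using the standard large-sample formula on the log scale (a textbook approximation, not necessarily the exact computation RevMan performs):

```python
import math

def rr_with_ci(a, b, c, d, z=1.96):
    """Risk ratio with an approximate 95% CI from a 2x2 table.
    SE of ln(RR) = sqrt(1/a - 1/(a+b) + 1/c - 1/(c+d))."""
    rr = (a / (a + b)) / (c / (c + d))
    se_log = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lower = math.exp(math.log(rr) - z * se_log)
    upper = math.exp(math.log(rr) + z * se_log)
    return rr, lower, upper

# Slide example: the CI excludes 1, so the reduction in deaths is statistically significant
print(rr_with_ci(14, 119, 128, 20))   # roughly (0.12, 0.07, 0.20)
```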

Page 19: Analysis and Interpretation

Which effect measure for meta-analysis?

• Relative effect measures are, on average, suggested to be more consistent than absolute measures (empirical evidence)
• Avoid the RD unless there is a clear reason to expect it to be consistent
• Generally recommended: RR or OR, but remember the risk of misinterpretation of the OR

Page 20: Analysis and Interpretation


Meta-analysis in RevMan

Page 21: Analysis and Interpretation


Meta-analysis in RevMan (continued)

Formulae for calculating effect measures and confidence intervals available on cochrane.org

Not available in RevMan: meta-regression

Page 22: Analysis and Interpretation

Fixed vs Random Effects

• Fixed effects: the true effect of the intervention (magnitude and direction) is the same value in every study
  – 'typical intervention effect'
  – No study-to-study variability
  – Only within-study variability
• Random effects: the effects being estimated among studies are not identical but follow some distribution (see the sketch below)
  – Studies estimate different, yet related, intervention effects
  – Estimate and CI describe the centre of the distribution of effects
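A compact sketch of the two approaches for studies summarized as (log) effect estimates with variances; the random-effects part follows the DerSimonian and Laird method shown in the RevMan screenshots below, but the code itself is illustrative, not RevMan's:

```python
import math

def fixed_effect(estimates, variances):
    """Inverse-variance fixed-effect pooling of study estimates (e.g. log RRs)."""
    weights = [1 / v for v in variances]
    pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return pooled, se

def dersimonian_laird(estimates, variances):
    """Random-effects pooling with the DerSimonian-Laird estimate of tau^2."""
    weights = [1 / v for v in variances]
    fixed, _ = fixed_effect(estimates, variances)
    q = sum(w * (y - fixed) ** 2 for w, y in zip(weights, estimates))
    df = len(estimates) - 1
    c = sum(weights) - sum(w ** 2 for w in weights) / sum(weights)
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    re_weights = [1 / (v + tau2) for v in variances]   # widen each study's variance by tau^2
    pooled = sum(w * y for w, y in zip(re_weights, estimates)) / sum(re_weights)
    se = math.sqrt(1 / sum(re_weights))
    return pooled, se, tau2

# Hypothetical log risk ratios and their variances from three trials
log_rrs = [-0.4, -0.1, -0.7]
variances = [0.05, 0.08, 0.12]
print(fixed_effect(log_rrs, variances))
print(dersimonian_laird(log_rrs, variances))
```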

Page 23: Analysis and Interpretation


Fixed Effects Analysis in Picture View

Page 24: Analysis and Interpretation


Random Effects Analysis in Picture View

Page 25: Analysis and Interpretation


Random effects in RevMan 5

← DerSimonian and Laird random effects model

Page 26: Analysis and Interpretation


Random effects in RevMan 5 (continued)

← DerSimonian and Laird random effects model

Page 27: Analysis and Interpretation


Sample Forest plot (RR)

# pts with events & total pts in each group

Page 28: Analysis and Interpretation

Meta-analysis for Continuous Data

• Two effect measures for data with a normal distribution: MD and SMD
• Data needed: sample size, mean, standard deviation (SD)
• Don't confuse SD with standard error (SE): SD = SE × √n
• Fixed or random effects analysis
• For change-from-baseline data: Chapters 7 and 9
• Skewed data: Chapter 9
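A small sketch of these quantities (plain Cohen's d is used for the SMD here for simplicity; the SMD in Cochrane reviews is usually the Hedges adjusted g, which adds a small-sample correction). The numbers below are hypothetical:

```python
import math

def sd_from_se(se, n):
    """Standard deviation from a reported standard error: SD = SE * sqrt(n)."""
    return se * math.sqrt(n)

def mean_difference(m1, m2):
    """MD: difference in means when all studies report the outcome on the same scale."""
    return m1 - m2

def standardized_mean_difference(m1, sd1, n1, m2, sd2, n2):
    """SMD (Cohen's d): mean difference divided by the pooled SD of the two groups."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical pain-score data: treatment 20 patients, control 22 patients
print(sd_from_se(1.2, 20))                                          # ~5.4
print(mean_difference(32.0, 38.5))                                  # -6.5 points on the same scale
print(standardized_mean_difference(32.0, 5.4, 20, 38.5, 6.1, 22))   # about -1.1 standard deviations
```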

Page 29: Analysis and Interpretation

Mean Difference (MD)

• Formerly called the weighted mean difference
• Use when studies measure the outcome on the same scale

Page 30: Analysis and Interpretation


Standardized Mean Difference (SMD)

Use when trials assess the same outcome but measure in a variety of ways, including using different scales

Page 31: Analysis and Interpretation

Heterogeneity

Any kind of variability among studies

• Clinical: participants, interventions, outcomes
  – True intervention effect will be different in different studies
• Methodologic: trial design, quality
  – Studies not estimating the same quantity, or suffering different degrees of bias
• Statistical: arises from clinical or methodologic heterogeneity... or both!
  – Observed effects of the intervention are more different than would be expected by chance
  – In practice, it can be difficult to separate the influence of clinical vs methodologic factors on observed statistical heterogeneity... it is likely due to both

Page 32: Analysis and Interpretation

Clinical and Methodologic Heterogeneity

• Are the differences across studies so great that they should not be combined?
• At the protocol stage, specify factors that you plan to investigate as potential causes of heterogeneity
• Be transparent about a priori vs post hoc investigations of heterogeneity in a review

Page 33: Analysis and Interpretation

Statistical Heterogeneity

• To what extent are the results consistent?
• Q test and I² statistic

Page 34: Analysis and Interpretation

Q test

• Q test: 'chi-squared' statistic
  – Care must be taken in interpretation
  – Low power with few studies or small sample sizes: a non-significant statistic does not mean heterogeneity is absent
  – High power with many studies: heterogeneity detected may not be clinically important
  – Use a P value cut-off of 0.10 to compensate for the low power

Page 35: Analysis and Interpretation

I² Statistic

• Instead of testing whether heterogeneity is present, assess its impact
• I² quantifies the extent of inconsistency (see the sketch below)
  – The percentage of variability in effect estimates that is due to heterogeneity rather than chance
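A minimal sketch of how Q and I² are related, using hypothetical study estimates and variances (I² = (Q − df)/Q, floored at zero and expressed as a percentage):

```python
def q_and_i_squared(estimates, variances):
    """Cochran's Q and the I^2 statistic for a set of study estimates."""
    weights = [1 / v for v in variances]
    pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, estimates))
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical log risk ratios and variances from four trials
print(q_and_i_squared([-0.4, -0.1, -0.7, 0.2], [0.05, 0.08, 0.12, 0.06]))
```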

Page 36: Analysis and Interpretation

I² Statistic (continued)

I² value        Guide to interpretation
0% to 40%       Might not be important
30% to 60%      May represent moderate heterogeneity*
50% to 90%      May represent substantial heterogeneity*
75% to 100%     Considerable heterogeneity*

* The importance of the I² value depends on:
  ● magnitude and direction of effects
  ● strength of evidence of heterogeneity (chi-squared P value, or I² confidence interval)

Page 37: Analysis and Interpretation


Sample Forest Plot: Q and I2

Page 38: Analysis and Interpretation

What to do with (Statistical) Heterogeneity

• Check that the data are correct
• Do not do the meta-analysis... it may be misleading
• Explore the heterogeneity
  – Subgroup analyses
  – Meta-regression
• Ignore it
  – A fixed-effect analysis ignores heterogeneity; ignoring it may suggest an intervention effect that does not actually exist

Page 39: Analysis and Interpretation

What to do with (Statistical) Heterogeneity (continued)

• Random effects meta-analysis
  – Incorporates heterogeneity but is not a substitute for a thorough investigation
• Exclude studies
  – Sensitivity analysis

Page 40: Analysis and Interpretation

Subgroup and Meta-regression

• Chapter 9
• Observational in nature
• Characteristics used should be prespecified; keep them to a minimum
• Conclusions from such analyses should be interpreted with caution
• Subgroups: splitting all studies into groups to make comparisons (see the sketch below)
• Meta-regression: an extension of subgroup analysis that allows investigation of continuous and categorical variables
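As a sketch of the idea behind a (fixed-effect) test for subgroup differences: pool each subgroup, then ask whether the subgroup estimates differ more than chance would allow. The data and grouping below are hypothetical:

```python
def pool_fixed(estimates, variances):
    """Inverse-variance fixed-effect pool: returns (estimate, variance)."""
    weights = [1 / v for v in variances]
    est = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    return est, 1 / sum(weights)

def subgroup_difference_q(subgroups):
    """Pool each subgroup, then compute Cochran's Q across the subgroup estimates.
    A large Q relative to its degrees of freedom suggests the intervention
    effect differs between subgroups."""
    pooled = [pool_fixed(est, var) for est, var in subgroups]
    ests = [p[0] for p in pooled]
    variances = [p[1] for p in pooled]
    overall, _ = pool_fixed(ests, variances)
    q_between = sum((e - overall) ** 2 / v for e, v in zip(ests, variances))
    return q_between, len(subgroups) - 1  # Q and its degrees of freedom

# Hypothetical log risk ratios and variances, split by (say) low vs high dose
low_dose = ([-0.2, -0.3], [0.06, 0.09])
high_dose = ([-0.8, -0.6, -0.7], [0.05, 0.07, 0.10])
print(subgroup_difference_q([low_dose, high_dose]))
```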

Page 41: Analysis and Interpretation


Subgroup Analysis

Page 42: Analysis and Interpretation

Sensitivity Analysis

• Chapter 9
• Addresses the question: are the findings robust to the decisions made in the process of obtaining them?
• Repeats the primary analysis, substituting alternative decisions or ranges of values for decisions that were arbitrary or unclear
• Some can be prespecified in the protocol, but many issues are identified only during the review process
• Don't confuse with subgroup analysis