Assessment of Bias

Cochrane Review author training workshop, January 22-23, 2009 at the University of Calgary Health Sciences Centre

Transcript of Assessment of Bias

Page 1: Assessment of Bias

Risk of Bias Assessment

Handbook, Chapter 8

Page 2: Assessment of Bias

Why assess study quality? We now assess Risk of Bias

If poor-quality trials are the building blocks of the review, the review may follow high-quality methods, but the quality of the evidence may still be poor

Page 3: Assessment of Bias

What is bias?

A systematic error or deviation from the truth in results or inferences

Can occur in either direction, and the direction may vary

Can vary in magnitude: small to large

Results of a study may be unbiased despite a methodologic flaw…therefore, should consider risk of bias

Page 4: Assessment of Bias

What is bias? (continued)

Differences in risk of bias can help explain variation in the results of studies

Important to assess in all studies, irrespective of the anticipated variability in either the results or the validity of the included studies

Used to help judge the quality of evidence

Page 5: Assessment of Bias

What is bias? (continued)

Cochrane Bias Methods Group provides the methodologic guidance for assessing and addressing bias in Cochrane reviews

Researches empirical evidence behind various biases

Details on empirical evidence in Chapter 8

Page 6: Assessment of Bias

What is bias? (continued)

Aren’t we supposed to talk about quality?

A study may be conducted to the highest possible standards but still have an important risk of bias

– For some interventions, investigators or participants cannot be blinded → acceptable given the nature of the intervention, but the study is not free of bias

Other markers of ‘quality’ are unlikely to have direct implications for risk of bias, eg, reporting a study according to CONSORT guidelines

Risk of bias overcomes the ambiguity between the quality of reporting and the quality of the research that was conducted

Page 7: Assessment of Bias

What is bias? (continued)

How bias differs from precision

Bias: systematic error
– Repeating the study multiple times would reach the wrong answer on average

Imprecision: random error
– Different effect estimates because of sampling variation
– Smaller studies…greater sampling variation…less precise
– Reflected in the confidence interval
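To make the distinction concrete, here is a minimal simulation sketch (illustrative only, with made-up numbers; not part of the workshop material): random error shrinks as the sample grows and is reflected in a narrowing confidence interval, while a systematic error persists no matter how large the study is.

import random
import statistics

def mean_and_ci(sample):
    """Return the sample mean and an approximate 95% confidence interval."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / len(sample) ** 0.5
    return m, (m - 1.96 * se, m + 1.96 * se)

random.seed(1)
true_effect = 10.0

for n in (20, 200, 2000):
    # Unbiased study: estimates scatter around the truth and the CI narrows as n grows.
    unbiased = [random.gauss(true_effect, 5) for _ in range(n)]
    # Biased study: a fixed systematic shift of +2 that a larger sample does not
    # remove -- on average the study still reaches the wrong answer.
    biased = [random.gauss(true_effect + 2, 5) for _ in range(n)]
    print(n, mean_and_ci(unbiased), mean_and_ci(biased))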

Page 8: Assessment of Bias

What tool do we use?

Use of scales explicitly discouraged
– Not supported by empirical evidence…difficult to justify the weights that are used for summary scores
– Often based on whether something was reported rather than whether it was done appropriately

No longer use the Jadad scale

Page 9: Assessment of Bias

Collecting information

Focus is at the individual study level
Include in the data extraction form
Distinguish reporting from conduct
– If not reported, you can’t determine whether it was done

Incomplete reporting is an issue
– Use open-ended questions when asking trial authors for information…may help to reduce overly positive answers

Page 10: Assessment of Bias

Sources of bias in clinical trials

Focus of session on RCTs

Page 11: Assessment of Bias

Risk of Bias tool

Recommended tool for assessing risk of bias in Cochrane reviews

Not a scale or a checklist but a domain-based evaluation

Two parts: (1) a description of what was reported; (2) a judgement based on that information

Question always framed so that:
– Yes = low risk of bias
– No = high risk of bias
– Unclear = unclear or unknown risk of bias
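As a minimal sketch of the two-part, domain-based structure described above (illustrative only; the field names are assumptions, not a RevMan data format), each entry can be thought of as a description paired with a Yes/No/Unclear judgement:

from dataclasses import dataclass

JUDGEMENTS = {"Yes": "low risk of bias",
              "No": "high risk of bias",
              "Unclear": "unclear or unknown risk of bias"}

@dataclass
class DomainAssessment:
    domain: str        # eg, "Sequence generation", "Allocation concealment", "Blinding", ...
    description: str   # part 1: verbatim quotes or a summary supporting the judgement
    judgement: str     # part 2: "Yes", "No" or "Unclear"

    def risk(self) -> str:
        return JUDGEMENTS[self.judgement]

# Example entry for one domain of one study
entry = DomainAssessment(
    domain="Sequence generation",
    description='"Patients were randomised using a computer-generated list."',
    judgement="Yes")
print(entry.domain, "->", entry.risk())   # Sequence generation -> low risk of bias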

Page 12: Assessment of Bias

Risk of Bias tool (continued)

Description
– Transparency for how judgements were made
– Should include verbatim quotes from reports or correspondence
– May include a summary of known facts or comment from the review authors
– Should include other information that influences the judgement
– When no information is available, state this explicitly

Page 13: Assessment of Bias

Risk of Bias tool (continued)

Examples of descriptions

Page 14: Assessment of Bias

Risk of Bias tool (continued)

‘Unclear’ judgements
– If insufficient detail is reported
– If what happened in the study is known but the risk of bias is unknown
– If the outcome was not measured in the study
– RevMan 5: if the text box is left empty, it will be omitted in the published version

Process: Collect information → Make judgements of risk → Make summary assessments → Incorporate into analyses (Chapter 8)

Page 15: Assessment of Bias

Sequence generation

Mechanism for allocating the intervention to participants

Adequate methods (randomization)
– eg, random number table, computer random number generator, coin toss, throwing dice

Inadequate methods (non-random)
– eg, date of birth, alternation, allocation by judgement of the investigator

Unclear
– eg, ‘we randomly allocated’, ‘using a randomized design’
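For illustration only (an assumed example, not a procedure prescribed by the Handbook), the ‘computer random number generator’ method listed above can be as simple as the following sketch; the blocked variant is a common refinement that keeps group sizes balanced:

import random

def simple_randomisation(n_participants, seed=42):
    """Allocate each participant to 'Intervention' or 'Control' at random."""
    rng = random.Random(seed)
    return [rng.choice(["Intervention", "Control"]) for _ in range(n_participants)]

def blocked_randomisation(n_blocks, block_size=4, seed=42):
    """Permuted blocks: keeps group sizes balanced while the order stays random."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = ["Intervention"] * (block_size // 2) + ["Control"] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

print(simple_randomisation(8))
print(blocked_randomisation(2))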

Page 16: Assessment of Bias

Selection bias

Systematic differences in participant characteristics at the start of a trial

(Diagram: intervention group vs. control group)

Page 17: Assessment of Bias

Allocation concealment

Preventing foreknowledge of the next allocations

What is used to implement the sequence

Don’t confuse with blinding of participants, personnel, etc

Page 18: Assessment of Bias

Allocation concealment (continued)

Adequate methods
– eg, central allocation; sequentially numbered, opaque, sealed envelopes

Inadequate methods
– eg, posted list of random numbers, alternation, date of birth, envelopes that met only 2 of the 3 criteria

Unclear
– Insufficient information to make a judgement, eg, use of envelopes described but no indication of the other components

Page 19: Assessment of Bias

Blinding

Emphasis should be placed on participants, providers, and outcome assessors

Lack of blinding could bias the actual outcomes (eg, differential cross-over) or the assessment of outcomes

All outcome assessments can be influenced, but especially those of subjective outcomes

There are situations where blinding is impossible (eg, oral vs intravenous medications)

Page 20: Assessment of Bias

Blinding (continued)

Use of terms like ‘double-blinded’ is problematic
– You don’t know exactly who was blinded!

What to consider when assessing:
– Who was and was not blinded
– Risk of bias in actual outcomes due to lack of blinding during the study (eg, co-intervention or differential behaviour)
– Risk of bias in outcome assessments (subjective vs objective)

Assessments of risk may need to be made for different (groups of) outcomes

Page 21: Assessment of Bias

Blinding (continued)

Adequate blinding
– eg, no blinding, but the review authors judge that the outcome is not likely to be influenced by lack of blinding
– eg, blinding of participants and key study personnel ensured and unlikely to have been broken

Inadequate blinding
– No blinding or incomplete blinding, and the outcome is likely to be influenced by lack of blinding

Unclear risk of bias
– Insufficient information
– The study did not address this outcome

Page 22: Assessment of Bias

Allocation concealment vs. blinding

(Timeline diagram: concealment of allocation operates up to the point of randomisation and guards against selection bias; blinding operates after randomisation and guards against performance bias)

Page 23: Assessment of Bias

Performance bias

Systematic differences, other than the intervention being investigated, in the treatment of the two groups

Occurs at the time of performing the intervention

Avoid performance bias by:
– blinding the care provider
– blinding the participant

Page 24: Assessment of Bias

Another form of performance bias is inadequate delivery of the intervention

Assess whether the study used a process analysis to ensure that all participants in the trial received the entire intervention according to the trial protocol (eg, by following a manual)

Eg, did the researchers visit every classroom and observe to ensure that all students received the entire intervention?

Eg, did the researchers ask each participant to assess the quality of the presentation of the intervention?

Page 25: Assessment of Bias

Incomplete outcome data

Missing outcome data

Incomplete outcome data: drop-outs or exclusions

‘Missing’: the participant’s outcome is not available

Some exclusions may be justifiable and should not be considered as leading to missing outcome data

When possible and appropriate, a participant can be reincluded in an analysis (if the exclusion was inappropriate and the data are available)

Page 26: Assessment of Bias

Attrition bias

Systematic differences in the loss of participants to follow-up between groups

Occurs over the duration of follow-up

Avoid attrition bias by:
– describing the proportion of participants lost to follow-up
– using intention-to-treat analyses

Completeness of follow-up
– participants lost to follow-up, or not included in the outcome assessment, could be different from those who remained in the trial
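A minimal sketch with made-up numbers (not from the workshop) of why intention-to-treat analysis matters: keeping everyone who was randomised in the denominator can give a different answer from a per-protocol analysis when attrition is uneven between groups.

def risk_ratio(events_a, total_a, events_b, total_b):
    return (events_a / total_a) / (events_b / total_b)

# Hypothetical trial: 100 participants randomised per arm.
randomised = {"intervention": 100, "control": 100}
events = {"intervention": 20, "control": 30}   # events observed among those followed up
lost = {"intervention": 25, "control": 5}      # lost to follow-up (uneven between arms)

# Intention-to-treat: denominator is everyone randomised (here treating those lost
# as having had no event, which is itself an assumption worth examining).
itt = risk_ratio(events["intervention"], randomised["intervention"],
                 events["control"], randomised["control"])

# Per-protocol ('as treated'): denominator is only those followed up.
pp = risk_ratio(events["intervention"], randomised["intervention"] - lost["intervention"],
                events["control"], randomised["control"] - lost["control"])

print(f"ITT risk ratio: {itt:.2f}")            # 0.67
print(f"Per-protocol risk ratio: {pp:.2f}")    # 0.84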

Page 27: Assessment of Bias

Incomplete outcome data (continued)

Risk of bias depends on several factors, including:
– Amount and distribution across intervention groups
– Reasons for missing outcomes
– Likely difference in outcome between participants with and without data
– What study authors have done to address the problem in their reported analyses
– Clinical context

Page 28: Assessment of Bias

Incomplete outcome data (continued)

Low risk of bias: no missing outcome data
– Confident that the participants included in the analysis are exactly those who were randomized into the trial
– If the numbers randomized are not clearly reported, the risk of bias is unclear
– Intention-to-treat analyses are rare; take care with understanding and use of the term

Page 29: Assessment of Bias

Incomplete outcome data (continued)

Low risk of bias (continued)

Acceptable reasons
– eg, moved away
– eg, for survival data, censoring done and the reason for censoring unrelated to prognosis
– eg, reasons are reported and balanced across groups (may not be possible to judge, though, due to incomplete reporting)

Page 30: Assessment of Bias

Incomplete outcome data (continued)

Impact of missing data on effect estimates

For dichotomous data, depends on the amount of information missing relative to the number of participants with events
– The higher the ratio, the greater the potential for bias

For continuous data, impact increases with the proportion of participants with missing data

Imputation
– Common, but potentially dangerous
– Can lead to serious bias
– Consult a statistician if you encounter it in your trials
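To illustrate the dichotomous-data point above, here is a rough sketch with made-up counts (an assumption for illustration, not a Handbook method): bounding the observed risk by imagining that all, or none, of the missing participants had the event shows how the potential for bias grows as missing data increase relative to events.

def risk_bounds(events, analysed, missing):
    """Return (lowest possible, observed, highest possible) risks for one arm."""
    observed = events / analysed
    low = events / (analysed + missing)                 # assume none of the missing had the event
    high = (events + missing) / (analysed + missing)    # assume all of the missing had the event
    return low, observed, high

# Few missing relative to events: the bounds stay close to the observed risk.
print(risk_bounds(events=20, analysed=100, missing=5))    # approx (0.19, 0.20, 0.24)
# Many missing relative to events: the bounds are wide -> greater potential for bias.
print(risk_bounds(events=20, analysed=100, missing=40))   # approx (0.14, 0.20, 0.43)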

Page 31: Assessment of Bias

Incomplete outcome data (continued)

High risk of bias

Importance of considering the reasons for incomplete outcome data…often unavailable, but reporting is likely to improve through use of the CONSORT statement

‘As treated’ (per-protocol) analyses

Page 32: Assessment of Bias

Selective outcome reporting

Selection of a subset of the original variables recorded, on the basis of the results, for inclusion in the publication of trials

Concern: statistically non-significant results might be selectively excluded from publication

Bias resulting from selective reporting of different measurements of an outcome seems likely, eg, published vs unpublished rating scales for schizophrenia

Need to consider whether an outcome was collected but not reported or simply not collected

Page 33: Assessment of Bias

Selective outcome reporting (continued)

Bias can occur through selective…:
– Omission of outcomes from reports: if based on statistical significance
– Choice of data for an outcome: if the choice of timepoints or measurement scales is based on the results
– Reporting of analyses using the same data: eg, choice of continuous vs dichotomous analysis, final value vs change-from-baseline
– Reporting of subsets of the data: eg, selecting subsets of events
– Under-reporting of data: eg, inadequate data for use in meta-analysis (‘not significant’ or p>0.05)

Page 34: Assessment of Bias

Selective outcome reporting (continued)

Other items to consider:
– Comparing the trial report with its published protocol, if available
– Checking abstracts of subsequently published trials for outcomes not in the published version
– Occurrences of missing data that seem sure to have been collected
– If there is suspicion of, or direct evidence of, selective outcome reporting, it is desirable to ask the study authors for more information
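As a small illustrative sketch of the first item in the list above (the outcome names are invented for the example), comparing the outcomes pre-specified in a protocol with those that appear in the report reduces to a simple set difference:

# Outcome names below are invented for the example.
protocol_outcomes = {"pain at 6 weeks", "function score", "adverse events", "quality of life"}
reported_outcomes = {"pain at 6 weeks", "function score"}

missing_from_report = protocol_outcomes - reported_outcomes
unplanned_in_report = reported_outcomes - protocol_outcomes

print("Pre-specified but not reported:", sorted(missing_from_report))
print("Reported but not pre-specified:", sorted(unplanned_in_report))
# A non-empty first set is a prompt to contact the study authors, not proof of bias.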

Page 35: Assessment of Bias

Selective outcome reporting (continued)

When completing the RoB tool, the assessment for selective outcome reporting is made for the study as a whole, even if the bias doesn’t apply to all outcomes

In the ‘Description’ part of the tool, list those outcomes for which there is evidence of selective reporting

Page 36: Assessment of Bias

Other sources of bias

Potential sources of bias should not be included here if they are more appropriately covered in the previous domains

Use for other sources that are important to consider in your review, for example:
– inappropriate influence of funders
– inappropriate co-intervention
– contamination
– selective reporting of subgroups
– baseline imbalance in important factors

Page 37: Assessment of Bias

Other sources of bias (continued)

Reminder! Use the answering convention:
– Yes = low risk
– No = high risk

Page 38: Assessment of Bias

Risk of bias in RevMan 5

RoB for each study

Page 39: Assessment of Bias

RoB in RevMan 5 – a closer look

One line per table (ie, per study)

Page 40: Assessment of Bias

RoB in RevMan 5 – a closer look

2 or more entries allowed: assessments for different outcomes

Page 41: Assessment of Bias

RoB in RevMan 5 – a closer look

‡ Entries depend on item, whether at study or outcome level

Page 42: Assessment of Bias

Example Risk of Bias table

Page 43: Assessment of Bias

RoB optional figure (RevMan 5)

Page 44: Assessment of Bias

RoB optional figure (RevMan 5)

Page 45: Assessment of Bias

Summary assessments

Need to make judgements about which domains are important

Judgements are made for an outcome within a single study and across studies (for Summary of Findings tables)

How judgements are reached should be made explicit and should be informed by:
– Empirical evidence of bias (Sections 8.5 to 8.14)
– Likely direction of bias
– Likely magnitude of bias

Page 46: Assessment of Bias

Summary assessments (continued)

Next few slides → Possible approach for summary assessments of risk of bias for each important outcome (across domains) within and across studies

Page 47: Assessment of Bias

Possible approach

Risk of bias: Low
Interpretation: Plausible bias unlikely to seriously alter the results
Within a study: Low risk of bias for all key domains
Across studies: Most information is from studies at low risk of bias

Page 48: Assessment of Bias

Possible approach (continued)

Risk of bias: Unclear
Interpretation: Plausible bias that raises some doubts about the results
Within a study: Unclear risk of bias for one or more key domains
Across studies: Most information is from studies at low or unclear risk of bias

Page 49: Assessment of Bias

Possible approach (continued)

Risk of bias: High
Interpretation: Plausible bias that seriously weakens confidence in the results
Within a study: High risk of bias for one or more key domains
Across studies: The proportion of information from studies at high risk of bias is sufficient to affect the interpretation of the results
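The within-study column of this possible approach can be read as a simple rule across key domains; a minimal sketch follows (illustrative only; which domains count as ‘key’ is a review-specific judgement, and the domain names here are placeholders).

def within_study_summary(key_domain_judgements):
    """Summarise risk of bias for an outcome within a study.

    key_domain_judgements maps each key domain to 'low', 'unclear' or 'high'.
    """
    values = set(key_domain_judgements.values())
    if "high" in values:
        return "high"       # high risk of bias for one or more key domains
    if "unclear" in values:
        return "unclear"    # unclear risk of bias for one or more key domains
    return "low"            # low risk of bias for all key domains

# Hypothetical study; which domains are 'key' is a review-level decision.
judgements = {
    "Sequence generation": "low",
    "Allocation concealment": "unclear",
    "Blinding (pain at 6 weeks)": "low",
    "Incomplete outcome data": "low",
}
print(within_study_summary(judgements))   # unclear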

Page 50: Assessment of Bias

Reporting biases

Occur when the dissemination of research findings is influenced by the nature and direction of results

Chapter 10: how to address in a Cochrane review

Page 51: Assessment of Bias

Reporting biases (continued)

Page 52: Assessment of Bias

Risk of bias exercise