Welcome to the group.
Thanks were given to Professor Steve Dain for agreeing to contribute as a topic expert.
1
12 face-to-face attendees + 3 via webinar.
2
3
The contemporary definition of evidence‐based practice is shown in the middle of the diagram. A reminder of where this evening sits within the 5 steps of learning the process of EBP.
The previous workshop focused on the first two steps of “ask” and “acquire”. The initial step or ASK involves asking an answerable question. This requires learning to ask focused questions
that lead to effective search strategies. The second step or ACQUIRE involves learning how to design and conduct a thorough search strategy to answer the question that was
formulated.
Tonight will concentrate on the third step or APPRAISE, which involves the critical evaluation of the validity and clinical relevance of the study or papers, including concepts such as levels of evidence, appropriateness of study design and statistical analysis. The fourth step or APPLY involves applying the evidence to practice. The fifth step or AUDIT involves reflection on how well the previous four steps worked. These are not the focus of tonight’s discussion.
4
At the introductory Evidence‐Based Practice Interest Group meeting, we agreed to focus our efforts on the evidence for recommending blue filter spectacles for the prevention of cataract and macular degeneration.
The scenario above illustrates the agreed topic of interest on which we were to construct our focused clinical question.
5
This interest was generated by recent publicity on these products such as the press release illustrated here.
6
Learning to ask focused questions is as simple as 1, 2, 3, 4 using the PICO strategy. P identifies the type of patient (e.g. gender, age group, race) and the clinical ‘problem’ (e.g. condition, disease) faced. I is for intervention. C stands for comparison and is used for the comparison intervention or diagnostic strategy. O is for outcome and refers to an outcome measure or indicator of some kind.
Based on the agreed scenario, a focused clinical question might look something like this.
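The PICO-to-search-strategy step can be sketched in code. This is a minimal illustration, and the terms and synonyms below are illustrative placeholders, not the group's exact wording: each PICO concept is grouped with OR, and the concepts are joined with AND to form a database search string.

```python
# Illustrative PICO for the agreed scenario (wording is an assumption,
# not taken verbatim from the workshop slides).
pico = {
    "P": "adults at risk of cataract or age-related macular degeneration",
    "I": "blue light filtering spectacle lenses",
    "C": "standard (untinted) spectacle lenses",
    "O": "incidence of cataract or macular degeneration",
}

# Combine synonyms with OR within each concept, and concepts with AND.
search = " AND ".join([
    '("blue light" OR "blue-blocking" OR "blue filter")',
    '(spectacle* OR lens*)',
    '(cataract OR "macular degeneration" OR AMD)',
])
print(search)
```

The same grouping logic applies whichever database is searched; only the field tags and truncation symbols change.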
7
Using the strategy described in this handout distributed at the last workshop, the literature can be searched for answers.
8
An example of search results using the PICO terms shown in previous slides is provided here. This generated 15 original articles including 1 abstract.
9
The process of evaluation or appraisal involves the following major steps for each piece of evidence found.
10
A brief overview of the basics of study design was provided.
There can be questions about aetiology or frequency of disease, prognosis, diagnosis, treatment or intervention, or patients’ experiences and concerns. Different types of questions are best answered by particular study types. Only one study type is very good at answering questions about interventions: the Randomised Controlled Trial or RCT. Even better are pooled analyses of many RCTs (e.g. meta‐analyses and systematic reviews).
All other study types are observational. They are good at answering other types of questions: e.g. cohort studies and case‐control studies are good at identifying risk factors that predict disease, at looking at what happens to disease over time and at comparing new tests to an old ‘gold standard’. Cohort studies are also good at looking at factors that can cause disease.
11
Cross‐sectional studies are effective at measuring frequency of disease, identifying risk factors and assessing new diagnostic tests.
Qualitative studies explore people’s perceptions and their reasons for behaving as they do.
Case series describe a group of people with a similar exposure or similar disease. They are good at generating new hypotheses or new questions about any area (diagnosis, prognosis, intervention, etc.).
12
For our PICO, a study design such as an RCT or a meta‐analysis would provide the “best” evidence.
13
Looking back at our previous search results, the articles can then be classified into the various study design types as shown above.
14
Each of these articles is listed here and in the next two slides, starting with the most trustworthy design (systematic review) and …
15
… finishing with expert report or opinions and …
16
…. laboratory or animal studies.
Other points of note made by the presenter or the audience were that:
1) No paper used spectacle lenses as the intervention; all the evidence found focused on blue-filtering intraocular lenses as the intervention of interest.
2) The outcome measure for the only existing systematic review was visual performance and NOT the outcome of our PICO (i.e. prevention of cataract or AMD).
17
Critical Appraisal involves asking the following three questions.
Is it true? Did the authors use an appropriate study design to answer the question (e.g. if about the effectiveness of a drug, using a randomised controlled trial)? Did they use appropriate methods in the study (e.g. an accurate form of tonometry for a glaucoma study)?
Is it useful? Is the extra information of clinical importance, or is the benefit so small it is not likely to be appreciated by the patient? (e.g. is the improvement in health that results from taking a new drug important enough to the patient to justify any costs or side‐effects?)
Is it useful for your patient? An effect can be both true and of great importance to the patients entered into a study, and yet be completely inappropriate for your patient. This might be because your patients are different from those in the study (e.g. more or less severely affected by their health condition, or belonging to a different age group or gender), or because of some financial or technical barrier (e.g. high-cost drugs might work, but cannot be afforded by patients in a poor country).
18
19
One of the two existing RCTs (Kara‐Junior et al, 2011) was chosen as the paper for a practical exercise on critical appraisal.
20
The next 8 slides need to be reviewed in combination with the Critical Appraisal Worksheet tool developed by Associate Professor Mike Bennett and Dr Rachel Thompson for UNSW Medicine (see handout).
Think carefully about the clinical problem the authors are interested in. Can you summarise the question in the 'PICO' format you have learned?
Does the paper report the answer to the question above, or do they actually report an answer to a slightly different question?
What type of question is it? Are both questions the same or not? If not, what might the importance of the difference be?
21
22
This was a question about intervention.
We really need to know if the study design is appropriate for the type of question we are trying to answer. If the study type is unsuitable, there is probably no purpose in continuing to read the paper! You might be better off trying to find a more valid paper that answers the clinical question.
23
24
A clue is often given in the introduction, or perhaps when we read the discussion. Have the authors used a subgroup of the people they are interested in?
Remember ‐ external validity is about whether the results could be confidently applied to patients you are treating.
Be as exact as you can about how this was done and when. Remember ‐ internal validity is about whether you have confidence in the results for the patients in the study.
25
26
27
28
Could you imagine being able to use exactly the same factor after reading the paper, or would you need more information in order to do this (e.g. you do not know the dose they used)?
The 'best' outcomes are usually directly important to the patient. For Albert, that might mean visual acuity rather than the number of drusen at the macula.
Confounders are perhaps a topic for another lecture.
29
30
31
This is usually the difference between the groups in finding the primary outcome. This is often expressed as a percentage (e.g. 20% in group A versus 30% in group B: the difference is 10% with fewer people having the outcome in group A).
Such figures are often associated with a P‐value (the probability that the difference is due to chance). It is in one of the tables in this paper. Is the difference important to your patients? This is always a difficult question to answer for sure!
Confidence intervals tell just how big or small a difference might be. Roughly speaking, there is only a 2.5 per hundred (2.5%) chance that the 'truth' is smaller than the lower limit, and the same chance of a bigger difference above the upper limit. The 'truth' could be anywhere in between, though!
Power represents the chance of finding a pre‐specified difference. When used to maximum advantage, authors use the smallest difference they feel will be important to their patients, then work out how big a study they need to do to be confident of finding that difference.
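The quantities described above can be illustrated with a short calculation. This sketch uses the hypothetical 20% vs 30% figures from the notes, not data from the Kara-Junior paper: it computes the absolute risk difference, a Wald-style 95% confidence interval (±1.96 standard errors, leaving roughly 2.5% in each tail), and an approximate per-group sample size for 80% power to detect that same difference.

```python
from math import sqrt, ceil

# Hypothetical trial results (illustrative only): events / group size
a_events, a_n = 20, 100   # 20% had the outcome in group A
b_events, b_n = 30, 100   # 30% had the outcome in group B

p_a = a_events / a_n
p_b = b_events / b_n
diff = p_b - p_a          # absolute risk difference (10%)

# Wald 95% CI for the difference: +/- 1.96 standard errors.
se = sqrt(p_a * (1 - p_a) / a_n + p_b * (1 - p_b) / b_n)
ci = (diff - 1.96 * se, diff + 1.96 * se)
print(f"Risk difference: {diff:.2f}, 95% CI: {ci[0]:.2f} to {ci[1]:.2f}")

# Approximate sample size per group for 80% power, two-sided alpha 0.05,
# to detect that same 10% difference:
# n ~ (z_alpha + z_beta)^2 * (p1(1-p1) + p2(1-p2)) / diff^2
z_alpha, z_beta = 1.96, 0.84
n_per_group = ceil((z_alpha + z_beta) ** 2
                   * (p_a * (1 - p_a) + p_b * (1 - p_b)) / diff ** 2)
print(f"Participants needed per group: {n_per_group}")
```

Note that with 100 per group the confidence interval here spans zero, which matches the intuition that small studies can miss a real 10% difference; the power calculation shows how many more participants would be needed to detect it reliably.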
32
33
Do you believe the interpretation is correct, or would you have put it differently?
34
How will you apply it to Albert?
35
36
A consensus was reached that an editorial piece on the need for properly designed studies supporting the effectiveness of blue light filtering spectacle lenses was timely. Members Jalbert, Suttle, Dain and Long have volunteered to steer this. Interested members are invited to contact Dr Jalbert to join the writing committee. Members in clinical practice would be particularly welcome.
Drs Jalbert & Suttle to canvass the whole membership of the EBP Interest Group for suggestions for future topics for critical appraisal.
Feedback on positive and negative aspects of the combined face-to-face/webinar format was sought from, and given by, attending members. Overall feedback favoured this format, with several suggestions for improvement provided.
Next meeting date: Friday May 23rd 2014 with guest presenter Dr Catherine Suttle from City University, UK.
37