Recruitment and Retention: the evaluator’s role

Transcript of Recruitment and Retention: the evaluator’s role

Page 1: Recruitment and Retention: the evaluator’s role

Session 4: Trial management

Recruitment and retention: the role of evaluators
Meg Wiggins (IoE)

Page 2

Recruitment and Retention: the evaluator’s role
Meg Wiggins – Institute of Education, London

Page 3

Recruiting schools

Page 4

Retaining schools

Page 5

PROJECT EXAMPLES

Page 6

Example 1

Intervention: 30 hours of primary school classroom chess teaching, delivered by external CSC tutors

Cluster trial, randomised at school level

Evaluation team at IoE: John Jerrim (Lead), Lindsey Macmillan, John Micklewright
Process evaluation – Meg Wiggins, Mary Sawtell, Anne Ingold

Page 7

Chess in Schools – Recruitment
• Community organisation – small central staff team

• Recruitment expectations – return to known ground

• Recruitment reality – IoE provided lists of schools selected on FSM % criteria, in their chosen LAs

• Capacity issues, limited understanding about RCTs, huge enthusiasm for the evaluation

Page 8

Chess in Schools – Recruitment 2

Nearly reached target of 100 primary schools within tight timeframe

Succeeded by tenacious, labour-intensive direct contact by phone

• Often before school; strategies for speaking directly to head teachers

• Ditched letters and emails as first approach
• Brought in dedicated person to recruit

Page 9

Chess in Schools – Recruitment 3

As evaluators we assisted recruitment by:
• Providing extra schools from which to recruit
• Providing extra time for recruitment
• Channelling enthusiasm – providing focus

Page 10

Chess in Schools – Retention in study

• Study designed to limit retention challenges
• Influenced by learning from earlier IoE EEF evaluations
• No testing within schools; use of NPD data
• Collection of UPNs before randomisation

Page 11

Chess in Schools – Retention in study

• Pre-randomisation baseline head teachers’ survey
• Showed some confusion about the trial and intervention

• Limited evaluation involvement in development of materials used in recruitment of schools

• How much were they used?

• Lack of forum for cascading study information beyond head/SLT

Page 12

Chess in Schools – Retention in intervention

Most intervention schools adopted the programme

CSC tell us that nearly all have completed the full 30-week intervention

• End-of-intervention survey of tutors and teachers pending to confirm this

Case study work flagged variation in schools re: lessons replaced by intervention
• Important to study; not critical for schools/Chess tutors

Page 13

Chess in Schools – Lessons learnt

• Beyond recruitment – importance of forum for cementing the key study messages within schools

• Tension between role as impartial evaluator observing from a distance and partner in achieving a successful intervention and evaluation

• Plan some interim formal means of assessing implementation and intervention retention

• Design of the study means that retention issues remain minimal

Page 14

Example 2

Intervention: Training primary class teachers to deliver a curriculum of French lessons as well as follow-up activities linking the learning of French to English literacy.

Cluster trial, randomised within schools at class level, across two year groups (3 & 4)

IoE evaluation team: Meg Wiggins (Lead), John Jerrim, Shirley Lawes, Helen Austerberry, Anne Ingold

Early Language Learning & Literacy (ELLL) Project

Page 15

Early Language Learning – Recruitment

Design of study influenced by:
– Tight study timeline – curriculum changes required post-intervention testing

– Extremely short recruitment window prior to commencement of teacher training

– Capacity to deliver intervention to limited numbers

Challenges in determining inclusion criteria for schools
• Key issues around specialist language teachers and within-schools randomisation design
• Over-burdening of London schools – EEF issue

Page 16

Early Language Learning – Recruitment
Compromises reached:

– Outside organisation brought in to recruit
– London schools allowed
– Relaxation of ban on specialist teachers (slight!)

Close liaison between CfBT and evaluation team
– Case-by-case basis recruitment
– Development of detailed recruitment materials – FAQs

Minimum target of 30 schools exceeded – 46 randomised

Page 17

Early Language Learning – Retention

Immediate post-randomisation drop out: 9 schools
• 2 couldn’t attend teacher training dates
• 2 schools disagreed with randomisation
• 5 never responded to invitation to teacher training

Additionally, 4 schools dropped one year group, but stayed in trial with other year group

Within one week – 46 schools reduced to 37!

Page 18

Early Language Learning – Retention

Evaluation team attended each training session and explained study to intervention teachers

– Found almost no knowledge of study had been cascaded down by heads

– Emphasised randomisation and no diffusion
– Answered many questions! Learnt from them!
– Provided teachers with FAQ sheet
– Explained plans for end-of-year testing

Page 19

Early Language Learning - Retention

• Used additional training events to continue evaluation presence

• All 37 schools have delivered (most of) the intervention

• Organising testing dates (mostly by email) has been fairly straightforward

Lots of messages back and forth to finalise. Testing begins Tuesday.

Page 20

Early Language Learning – Lessons Learnt
• Tight recruitment period led to inclusion of schools that weren’t committed. Role of external recruitment agency?

• Tension between confusing schools with contacts from programme and evaluation teams vs. not having evaluation messages clearly conveyed.

• Need to ensure evaluation messages reach those that deliver interventions, not just Heads.

• Allowing time and resources for communicating with schools at every stage – no shortcuts to personal contact.

Page 21

Do our experiences tally with yours?

Audience discussion

Page 22

Task - table discussion and feedback

What one top tip or suggestion would you make for recruitment, retention or communication with schools?

Page 23

My conclusions
• Design with recruitment and retention at the fore

• There is no substitute for evaluation team direct contact with schools – allocate resources accordingly

• Be flexible – balance rigour with practicality. Choose your battles!

Page 24

Session 4: Analysis and reporting

Analysis methods and calculating effect sizes
Ben Styles (NFER)

Analysis Plans: A cautionary tale
Michael Webb (IFS)

Page 25

Analysis and effect size
Ben Styles

Education Endowment Foundation, June 2014

Page 26

Analysis and effect size

• How design determines analysis methods
• Brief consideration of how to deal with missing data
• How to calculate effect size

Page 27

‘Analyse how you randomise’

• Pupil randomised
• The ideal trial: t-test on attainment
• Usually have a covariate: regression (ANCOVA)
• Stratified randomisation: regression with stratifiers as covariates
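
For a pupil-randomised trial with no covariate, the "ideal trial" analysis above is just a two-sample t-test on attainment. A stdlib-only sketch; the attainment scores are made up for illustration:

```python
import math
from statistics import mean, variance

# Hypothetical attainment scores from a small pupil-randomised trial
treat = [10, 12, 11, 13]
control = [9, 10, 8, 9]

n1, n2 = len(treat), len(control)

# Pooled-variance two-sample t statistic on attainment
pooled_var = ((n1 - 1) * variance(treat)
              + (n2 - 1) * variance(control)) / (n1 + n2 - 2)
se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
t_stat = (mean(treat) - mean(control)) / se
```

With a baseline covariate the same comparison becomes a regression of attainment on treatment plus baseline (ANCOVA), with any stratifiers added as further covariates; in practice a regression package would be used rather than hand-rolled arithmetic.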

Page 28

‘Analyse how you randomise’

• Cluster randomised (think about an imaginary very small trial to understand why)
• t-test on cluster means
• Regression of cluster means with baseline means as a covariate
• ‘It’s the number of schools that matters’
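
The cluster-level analysis can be sketched by collapsing pupils to school means first. All schools, arms and scores below are hypothetical:

```python
from statistics import mean

# Hypothetical pupil scores keyed by school; randomisation was at school level
scores = {
    "school_A": [10, 12, 11, 13],   # intervention
    "school_B": [14, 15, 13, 14],   # intervention
    "school_C": [9, 10, 8, 9],      # control
    "school_D": [11, 10, 12, 11],   # control
}
arm = {"school_A": "int", "school_B": "int",
       "school_C": "ctl", "school_D": "ctl"}

# 'Analyse how you randomise': one mean per cluster, then compare those means
cluster_means = {school: mean(pupils) for school, pupils in scores.items()}
int_means = [m for s, m in cluster_means.items() if arm[s] == "int"]
ctl_means = [m for s, m in cluster_means.items() if arm[s] == "ctl"]
diff = mean(int_means) - mean(ctl_means)
```

The t-test (or regression with baseline cluster means as a covariate) then runs on these four values, which is why it is the number of schools, not pupils, that matters.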

Page 29

BUT

• If we have an adequate number of schools in the trial, say 40 or more
• We have a pupil-level baseline measure
• We can use the baseline to explain much of the school-level variance
• Multi-level analysis

Page 30
Page 31

Missing data
• Prevention is better than cure
• Attrition is running at about 15% on average in EEF trials
• Using ad hoc methods to address the problem can lead to misleading conclusions
• http://educationendowmentfoundation.org.uk/uploads/pdf/Randomised_trials_in_education_revised.pdf
• Baseline characteristics of analysed groups
• Baseline effect size

Page 32

Effect size
• We need a measure that is universal
• The difference between intervention group mean and control group mean
• As measured in standard deviations

Page 33

Effect size

• See EEF analysis guidance at http://educationendowmentfoundation.org.uk/uploads/pdf/Analysis_for_EEF_evaluations_REVISED3.pdf

• Write a spreadsheet that does it for you

Page 34

For use in pupil-randomised trials

Inputs
  Standard deviation of treatment group: 8.794
  Standard deviation of control group: 9.132
  Number of cases in treatment group (only cases included in the regression model): 175
  Number of cases in control group (only cases included in the regression model): 180
  x1bar - x2bar (intervention group mean minus control group mean; usually the regression coefficient for intervention): 0.229249
  Standard error of effect (SE of regression coefficient): 0.704391
  Degrees of freedom for CI (same as residual mean square degrees of freedom): 347

Calculations
  Pooled outcome SD: 8.967
  Correction factor: 0.998
  Raw CI (upper): 1.615
  Raw CI (lower): -1.16
  Hedges' g: 0.03
  CI (upper): 0.18
  CI (lower): -0.13
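
The spreadsheet logic above can be sketched in Python; plugging in the input values from the example reproduces the reported Hedges' g of 0.03 and its confidence interval (the t critical value is approximated rather than looked up):

```python
import math

# Input values from the worked example above
sd_treat, sd_ctrl = 8.794, 9.132
n_treat, n_ctrl = 175, 180
diff = 0.229249   # intervention coefficient (mean difference)
se = 0.704391     # its standard error
df = 347          # residual degrees of freedom

# Pooled outcome SD from the two group SDs
sd_pooled = math.sqrt(((n_treat - 1) * sd_treat ** 2
                       + (n_ctrl - 1) * sd_ctrl ** 2) / (n_treat + n_ctrl - 2))

# Hedges' small-sample correction factor
correction = 1 - 3 / (4 * df - 1)

# Raw 95% CI for the mean difference; t for 347 df is roughly 1.967
t_crit = 1.967
raw_upper = diff + t_crit * se
raw_lower = diff - t_crit * se

# Effect size and its CI, expressed in pooled-SD units
g = correction * diff / sd_pooled
g_upper = correction * raw_upper / sd_pooled
g_lower = correction * raw_lower / sd_pooled
```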

Page 35

But what about multi-level models?
• Difference in means is still the model coefficient for intervention
• But the variance is partitioned – which do we use?
• And the magnitude of the variance components changes depending on whether we have covariates in the model – with or without?

Page 36
Page 37

Arrggh!

Page 38

We want comparability

• Always think of any RCT as a departure from the ideal trial

• We want to be able to compare cluster trial effect sizes with those of pupil-randomised trials

• We want to meta-analyse

Page 39

Which variance to use

• Pupil-level
• Before covariates
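
"Pupil-level, before covariates" corresponds to the variance components of an empty model. For balanced hypothetical data they can be recovered with the one-way ANOVA method of moments (a multilevel package would normally estimate them by REML):

```python
from statistics import mean

# Hypothetical balanced data: 3 schools, 4 pupils each, no covariates
schools = {
    "A": [10, 12, 11, 13],
    "B": [9, 10, 8, 9],
    "C": [14, 15, 13, 14],
}
k = len(schools)   # number of schools
n = 4              # pupils per school
grand = mean(x for pupils in schools.values() for x in pupils)

# Within-school mean square = pupil-level variance estimate
ssw = sum((x - mean(pupils)) ** 2
          for pupils in schools.values() for x in pupils)
ms_within = ssw / (k * n - k)

# Between-school mean square and school-level variance component
msb = n * sum((mean(pupils) - grand) ** 2
              for pupils in schools.values()) / (k - 1)
sigma2_between = (msb - ms_within) / n

# Intraclass correlation: share of total variance sitting at school level
icc = sigma2_between / (sigma2_between + ms_within)
```

Under this choice the effect-size denominator would be the pupil-level SD, sqrt(ms_within); reporting both components separately lets readers reconstruct a total-variance version as well.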

Page 40

This is controversial

• Before or after covariates means two different things

• At York on Monday, opinion leaned towards total variance, but pupil-level is better for meta-analysis

• Report all the variances and say what you do

Page 41

Conclusions

• A well designed RCT usually leads to a relatively simple analysis

• Some of the missing data methods are the domain of statisticians

• Be clear how you calculate your effect size

Page 42

Analysis Plans: A cautionary tale
Michael Webb (IFS)