The SO WHAT Factor: Impact Evaluation Strategies for Teacher Educators


Purpose

This paper arose out of the Training and Development Agency for Schools (TDA) information and communication technology (ICT) in initial teacher training (ITT) impact evaluation project 2008/09. It aims to provide an accessible resource with practical ideas and models for evaluating the impact of a technology intervention from inception to completion.

One of the key findings of the evaluation was that one size definitely doesn't fit all when selecting the framework or model of evaluation.

    Although intended for teacher educators, this paper will be relevant to

    anyone in education interested in assessing the impact of technology in

    teaching and learning. The ideas should be useful to those implementing a

    range of innovative projects who want a customisable evaluation that

covers the breadth of creative work occurring in ITT with ICT. The evaluation methods used in the ICT in ITT project can be seen in the main report at

    www.tda.gov.uk/techforteaching, where we also put forward an embryonic

    model for determining project success factors.

Acknowledgments

This project was conceived and funded by Becta in support of the TDA evaluation. Becta has been working with the TDA to support the evaluation advisory group. Becta also funded the evaluation team at the University of Wolverhampton to produce additional materials on evaluating the impact of a technology intervention. Thanks are due to Malcolm Hunt at Becta, who guided the process, and to Dr Michael Stokes at the University of Wolverhampton, who conceptualised the document and drew the strands together. The complex logic model referred to in this document was derived from the University of Wisconsin (UW) Extension logic model, with kind permission.


Contents

Section one: Evaluation - what is it and why do it?

Section two: Guiding principles for evaluation

Section three: Major frameworks and evaluation models
Kirkpatrick's evaluation of training model
Guskey's five levels of professional development evaluation
Logic frame model evaluation
Self-review framework for ICT
The test-bed e-maturity model

Section four: Evaluation tools
The five phases of ICT adoption
E-maturity models
Technology - a vehicle for enquiry-based learning

Section five: Information on evaluation
Examples of resources available to support and guide evaluation of ICT

Section six: An example of the use of an evaluation framework
Evaluation factors
Findings

References


Section one: Evaluation - what is it and why do it?

People use different terminology when talking about evaluation, and they have different perspectives on its nature and purpose. Bennett (2003, p15) notes that, while there has been ongoing debate for several decades over the nature and purpose of evaluation, it remains an important area of research in education. Easterby-Smith (1986, p13) states his own three reasons for evaluating more succinctly as:

    proving

    improving, and

    learning.

    This document aims to make the purpose of evaluation and the approaches to

evaluation clearer by concentrating on the evaluation of ICT in education. Definitions of evaluation abound, and Bennett (2003) offers 13 without concluding an overall definition. One biased towards education evaluation is from Nevo (1995, p11),

who suggests that it is an act of collecting systematic information regarding the nature and quality of educational objects, implying that evaluation is a combination of description and judgement. The UK Evaluation Society (1994) also highlights the

    collection of information in saying evaluation is an in-depth study which takes

    place at a discrete point in time, and in which recognised research procedures are

    used in a systematic and analytically defensible fashion to form a judgement on

    the value of an intervention.

How such collection of information or research is organised may direct us to Scriven's (1967) idea of having two forms of evaluation: formative evaluation, which would support the development of your project, and summative evaluation, for assessing the final impact of a project. Goodall et al (2005, p37) supported this: Effective evaluation of continuing professional development (CPD) will usually need to serve

    two main purposes: summative evaluation (does the programme/activity improve

    outcomes?) and formative assessment (how can the programme/activity be

    improved?). They go on to be critical of CPD evaluation practice and offer their own

    model of evaluation, the route map (found in the examples of evaluation practice in

    this report in section 5).

    In considering ICT in education, the formative function will include the evaluation

    of instructional materials and pedagogic processes. This may relate to either the

development or use of materials and delivery of learning. A definition that appears to be relevant to ICT issues in education is from Stern (1988), who suggests:

    Evaluation is any activity that, throughout the planning and delivery of innovative

    programmes, enables those involved to learn and make judgements about the

    starting assumptions, implementation processes, and outcomes of the innovation

concerned. Guskey (1998) offers his definition of evaluation (adapted from the Joint Committee on Standards for Educational Evaluation, 1994, p1): Evaluation is the systematic investigation of merit or worth, proposing that it is a structured, measured and measurable approach. Chelimsky (1997, p101) sums up why we evaluate in stating: We look to evaluation as an aid to strengthen our practice, organisation and programmes. To this end, all critics agree that any reason or reasons for the evaluation should be stated before any evaluation takes place.

    This is reinforced by Guskey (2002), who reminds us that good evaluation is built in

    from the outset of the professional development programme or activity, not added



on at the end. The Research Councils UK (2005) emphasise this too in confirming

    that evaluation is a process that takes place before, during and after a project.

    It includes looking at the quality of the content, the delivery process, and the impact

    of the project or programme on the audience(s). Some evaluation frameworks

incorporate a model planning process for a project as well as an evaluation framework for the project, eg, logic frame models.

    Guskey (2002, p1) helps to explain Why evaluate?: The processes and procedures

    involved in evaluation present an endless list of challenges that range from very

    simple to extremely complex. Well-designed evaluations are valuable learning tools

    that serve multiple audiences. They inform us about the effectiveness of current

    policies or practices, and guide the content, form, and structure of future endeavours.

Poorly designed evaluations, on the other hand, waste time, energy and other valuable resources... good evaluations do not have to be costly, nor do they require sophisticated technical skills. What they require is the ability to ask good questions and a basic understanding about how to find valid answers. Good evaluations provide information that is sound, useful, and sufficiently reliable to use in making thoughtful and responsible decisions about projects, programs, and policies.

Section two: Guiding principles for evaluation

In carrying out evaluations, participants should decide why and how they will carry them out. Drawing on the experience of CeDARE, Hadfield (2008) proposes five sets of principles that participants should consider for any evaluation:

1. Identifying the focus and purpose of evaluation
Evaluations should:

    cover the four key levels of access and participation, participant learning,

    participant behaviour, and organisational impact

    have clear foci that are at least in part co-constructed with participants and

    address their needs as well as those of providers

    be directed towards outcomes which can be communicated to and used by

    key stakeholders within the theme, and

    balance the amount of effort to conduct them with the potential benet of

    their outcomes.

    2. Building on what is already known

    Evaluations should:

    have convincing arrangements for accessing and building upon existing evidence

    and knowledge of effective practice, and

wherever possible use existing frameworks and tools that are already live within the system.

    3. Gathering evidence

    Evaluations should:

try as far as possible to reuse and/or increase use of relevant evidence that has already been collected

    ensure, as far as possible, that the process of collecting any new evidence is a

    learning experience for those involved

    have clear strategies for triangulation, by collecting different sorts of evidence

    from different groups in more than one context, and

    follow recognised ethical guidelines for both collection and storage.

    4. Analysing and interpreting

    Evaluations should:

    analyse existing data before collecting additional forms

    use or adapt existing frameworks if they are well recognised and regarded

    balance a search for consistent themes with contradictory messages and the

    unexpected outcomes, and

    include practical arrangements for checking interpretations and summaries.

    5. Communication and feedback

    Evaluations should:

    report back in forms and ways that are accessible and appropriate to

    key audiences

where possible, use short timely feedback loops rather than rely on summative feedback, and

    generate a short summary of key learning and impact that can be fed to others.


Section three: Major frameworks and evaluation models

This section (bearing in mind the principles above) identifies some major frameworks for evaluation and provides links to approaches and models of practice in evaluation for use in a variety of situations. It draws on methods of practice from the CeDARE (2009) ICT in ITT survey analysis report and on selected examples from the literature and the internet.

The evaluation research that provided the stimulus for this paper used evaluation models developed by Kirkpatrick for evaluating training. This also included approaches for impact evaluation based on the work of Hooper and Rieber (1995) and Fisher (2006) for applying this evaluation to ICT development and impact on trainees and trainers (the Kirkpatrick model is described below; the Hooper and Rieber work is covered in section four).

    Evaluation models

Kirkpatrick's evaluation of training model

Kirkpatrick developed his four-step model for the evaluation of training and development in business organisations and, according to this model, evaluation

    should begin at level one and then, as time and budget allows, should move

    sequentially through levels two, three and four. Each successive level represents a

    more precise measure of the effectiveness of the training programme, but at the

    same time requires a more rigorous and time-consuming analysis. The model consists

    of four stages, originally described as steps but since 1996 considered as levels,

    and is applicable for all forms of programme evaluation, including ICT in ITT.

Level one: reactions - what the participants in the programme felt about the

    project/programme, normally measured by the use of reaction questionnaires

    based upon their perceptions. Did they like it? Was the material relevant to

    their work? A tool such as a happy sheet is often utilised at this level. Level one

    evaluation is viewed by Kirkpatrick as the minimum requirement, providing some

    information for the improvement of the programme.

Level two: learning - this moves the evaluation on to assessing the changes in

    knowledge, skills or attitude with respect to the programme/project objectives.

Measurement at this level is more difficult, and formal or informal testing or

    surveying is often used, preferably pre- and post-programme.

Level three: behaviour - evaluating at this level attempts to answer the question:

    are the newly acquired skills, knowledge or attitude being used in the everyday

environment of the learner? Measuring at this level is difficult as it is often not

    easy to predict when the change of behaviour will occur, and therefore important

    decisions may have to be made as to when to evaluate, how often to evaluate

    and how to go about the evaluation. In the ICT in ITT project, questionnaires to

determine changes in practice were used, with questions based on a modified e-maturity scale from the work of Hooper and Rieber (1995).

Level four: results - this level seeks to evaluate the success of the programme in

    terms of results for the organisation, usually stated in improvements in quality.

Determining the improvements in quality of practice is probably the most difficult aspect of the evaluation framework.

(Summary adapted from Tamkin, Yarnall and Kerrin, 2002)
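Because the model is strictly sequential, it can be captured in a few lines of code. The sketch below is illustrative only: the level names and questions are taken from the summary above, while the data structure and the plan_evaluation helper are assumptions made for this example, not part of Kirkpatrick's own method.

```python
from dataclasses import dataclass

@dataclass
class KirkpatrickLevel:
    number: int
    name: str
    question: str    # what this level of evaluation asks
    instrument: str  # a typical data-collection tool at this level

# The four levels as summarised above; the instruments are illustrative.
LEVELS = [
    KirkpatrickLevel(1, "reactions", "Did participants like it? Was it relevant?",
                     "reaction questionnaire ('happy sheet')"),
    KirkpatrickLevel(2, "learning", "Did knowledge, skills or attitudes change?",
                     "pre- and post-programme testing or surveying"),
    KirkpatrickLevel(3, "behaviour", "Are the new skills used in everyday practice?",
                     "follow-up questionnaires and observation"),
    KirkpatrickLevel(4, "results", "Did organisational quality improve?",
                     "organisational quality indicators"),
]

def plan_evaluation(levels_affordable: int) -> list:
    """Begin at level one and move sequentially as far as time and budget allow."""
    return LEVELS[:max(1, min(levels_affordable, len(LEVELS)))]

for level in plan_evaluation(levels_affordable=3):
    print(f"Level {level.number} ({level.name}): {level.question}")
```

The ordering in plan_evaluation mirrors the point made above: each successive level gives a more precise measure but demands a more rigorous and time-consuming analysis.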



    Arguments for the use of this model

    The four-level model can facilitate professional development evaluations because

    it describes how evaluation can be conducted and how it can be useful at each

    level. There are lots of examples of its use worldwide and it is practical and simple

    to use.

    Arguments against the use of this model

The main criticisms of the approach are based on the fact that the model has been used mainly at level one - the satisfaction of learners with the training they have received. It is also considered that the immediate reactive response from learners at the end of their training does not clearly link to the other levels.

    Such an evaluation may be useful for trainer satisfaction but may not help identify

    what has been learned.

Comment

According to the study by Tamkin et al (2002, p.xiii), the overall conclusion is that the [Kirkpatrick] model remains very useful for framing where evaluation might be made. The CeDARE ICT in ITT survey analysis used the multiple levels of the Kirkpatrick model to determine what was already known from reviewing previous project evaluations of ICT, and to identify a suitable sampling framework for investigation at greater depth, ie, at Kirkpatrick's levels three and four.

Guskey's five levels of professional development evaluation

Guskey modified Kirkpatrick's model for use in evaluating staff development in education. He comments that the Kirkpatrick model had only limited use in education because it lacked explanatory power: it was seen as helpful in addressing a broad range of what questions, but fell short when it came to explaining why. This new five-level model (see table 1 below) is one that was advocated in a study by Goodall et al (2005), who noted that Guskey's model was adapted from Kirkpatrick's (1959) model. Goodall et al went on to suggest their own route map, drawing from their experience of reviewing the work of Guskey.


Table 1. Five levels of professional development evaluation (adapted from Guskey, TR, 2000)

1. Participants' reactions
What questions are addressed? (examples) Did they like it?
How will information be gathered? (examples) Usually a questionnaire at the end of the session

2. Participants' learning
What questions are addressed? Did participants learn what was intended?
How will information be gathered? Assessments, demonstrations, reflection portfolios

3. Organisation support and change
What questions are addressed? What was the impact on the organisation? Were sufficient resources made available? Were mentors or coaches used?
How will information be gathered? Questionnaires, minutes of meetings, interviews, focus groups

4. Participants' use of new knowledge and skills
What questions are addressed? Did participants effectively apply the new skills?
How will information be gathered? Questionnaires, interviews, reflection, observation, portfolios

5. Student learning outcomes
What questions are addressed? What was the impact on students? Did it affect student achievement? Did it influence student well-being? Is student attendance improving?
How will information be gathered? Student records/results, participant questionnaires, portfolios, focus groups


    Arguments for the use of this model

It is designed for staff development in an educational context. The end product is a model that is very useful in guiding the implementation and evaluation of a programme. It is straightforward to use.

    Arguments against the use of this model

As with Kirkpatrick, the model is said to be simplistic. There is also no recognition of the time-lag necessary between the first three levels and the last two. To evaluate levels four and five requires the new knowledge or skills identified in levels one to three to be applied in practice and to have an impact on students' learning outcomes.

    These learning outcomes will have to be recognised and measured over time in order

    to evaluate whether the intervention has brought about new teaching approaches

    that have been embedded and are successful.

Comment

In using this model, Guskey suggests that you start with the questions at level five as a basis for planning your evaluation. A recent study by Davis et al (2009, p146) confirmed that multi-level evaluation of professional development does indeed apply to ICT-related teacher training. We therefore recommend that all five of Guskey's levels be consistently adopted for the evaluation of ICT training.
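Guskey's backward-planning advice can be made concrete with a small sketch. The level names come from table 1; the code itself is a minimal illustration assumed for this document, not Guskey's own instrument.

```python
# Guskey's five levels, as in table 1.
GUSKEY_LEVELS = {
    1: "Participants' reactions",
    2: "Participants' learning",
    3: "Organisation support and change",
    4: "Participants' use of new knowledge and skills",
    5: "Student learning outcomes",
}

def backward_plan() -> None:
    """Plan from level five downwards, so that the desired student learning
    outcomes drive the questions asked at every other level."""
    for number in sorted(GUSKEY_LEVELS, reverse=True):
        print(f"Plan level {number}: {GUSKEY_LEVELS[number]}")

backward_plan()  # prints level 5 first, level 1 last
```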

    Logic frame model evaluation

    A logic model presents a picture of how your effort or initiative is supposed to work.

    It explains why your strategy is a good solution to the problem at hand. Effective

    logic models make an explicit, often visual, statement of the activities that will bring

    about change and the results you expect to see for the community and its people.

    A logic model helps maintain the momentum of the planning and evaluation process

    and participant involvement by providing a common language and point of reference.

    A detailed model indicates precisely how each activity will lead to desired changes.

In the UK, the logic frame model for evaluation has usually been used for planning and evaluating large-scale projects in developing countries; however, it is now seen as a relevant model whenever evaluation is considered.

    A logic model is a plausible, sensible model of how a programme is supposed to

    work (Bickman, 1987, p5). It serves as a framework and a process for planning to

    bridge the gap between where you are and where you want to be. It provides a

    structure for clearly understanding the situation that drives the need for an initiative,

    the desired end state, and how investments are linked to activities for targeted people

in order to achieve the desired results. A logic model is the first step in evaluation. The logic model describes the sequence of events thought to bring about benefits

    or change over time.


The elements of the logic frame model are resources, outputs, activities, participation, short-, medium- and longer-term outcomes, and the relevant external influences (Wholey, 1983, 1987). Sundra et al (2003, p6) describe the logic model as a visual link of programme inputs and activities to programme outputs and outcomes, which shows the basic logic for these expectations. The logic frame model is an interactive tool, providing a framework for programme planning, implementation and evaluation, and was one of the models reflected on by Glenaffric Ltd (2007) in constructing its evaluation model for the Joint Information Systems Committee (JISC). See the complete model in section five of this document.

    At its simplest, the logic model may be illustrated by diagram 1.

In practice the diagram is likely to end up being more complex, as each of the areas under consideration is set out in more detail. See diagram 2.


Diagram 1. A simple logic frame model

Inputs (programme investments) → Outputs (activities, participation) → Outcomes (short, medium and long term)
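The elements listed above translate naturally into a data structure. The following minimal sketch, with field names assumed from Wholey's element list rather than taken from any published tool, shows how a logic model can double as a checklist of evaluation questions.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Wholey's elements: inputs, outputs (activities and participation),
    outcomes over three time-scales, and external influences."""
    inputs: list = field(default_factory=list)         # what we invest
    activities: list = field(default_factory=list)     # what we do
    participation: list = field(default_factory=list)  # whom we reach
    short_term: list = field(default_factory=list)     # learning outcomes
    medium_term: list = field(default_factory=list)    # action outcomes
    long_term: list = field(default_factory=list)      # longer-term results
    external_influences: list = field(default_factory=list)

    def evaluation_questions(self) -> list:
        # Each link in the chain becomes a question the evaluation must answer.
        return [
            f"Were the planned inputs in place? ({', '.join(self.inputs)})",
            f"Were the activities delivered? ({', '.join(self.activities)})",
            f"Who was actually reached? ({', '.join(self.participation)})",
            f"Did the short-term outcomes occur? ({', '.join(self.short_term)})",
        ]
```

Section six returns to this structure, in simple dictionary form, for the DTEP worked example.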


Diagram 2. A more complex logic frame model: the Program Action Logic Model. This diagram is taken from the UW-Extension logic model (2008), used with kind permission from UW-Extension.

The full model sets out, from left to right:

Situation: needs and assets; symptoms versus problems; stakeholder engagement.

Priorities, considering: mission, vision, values, mandates, resources, local dynamics, collaboration, competition.

Inputs (what we invest): staff, volunteers, time, money, research base, materials, equipment, technology, partners.

Outputs - activities (what we do): conduct workshops and meetings, deliver services, develop products, curriculum and resources, train, provide counselling, assess, facilitate, partner, work with media.

Outputs - participation (whom we reach): participants, clients, agencies, decision-makers, customers; satisfaction.

Outcomes - short term (learning): awareness, knowledge, attitudes, skills, opinions, aspirations, motivations.

Outcomes - medium term (action): behaviour, practice, decision-making, policies, social action.

Outcomes - long term: what the longer-term results are.

Assumptions and external factors run beneath the whole chain, and the evaluation strand runs: focus, collect data, analyse and interpret, report.


    Arguments for the use of this model

    It integrates planning, performance measurement and evaluation in one model.

    A logic frame model describes a programme and its theory of change. It is useful in

    helping to focus an evaluation. Furthermore, suggest Taylor-Powell and Henert (2008),

    the process can facilitate team building and stakeholder buy-in, as well as ensuring

    that implicit program assumptions are made explicit.

    Evaluators have found the logic frame model process useful in a wide range of small

    and complex programmes and interventions in industrial, social and educational

    contexts. A logic frame model presents a plausible and sensible model of how

the programme will work under certain conditions to solve identified problems (Bickman, 1987). Thus the logic frame model is the basis for a convincing story of the programme's expected performance. A manager has to both explain the elements of the programme and present the logic of how the programme works. Patton (1997) refers to a programme description such as this as an espoused theory of action, that is, stakeholder perceptions of how the programme will work.

    Arguments against the use of this model

Critics suggest that the logical approach is too simple as an evaluation framework, as it appears to assume that all projects are linear. It is perceived as rigid and can lead to the over-simplification of complex social processes.

    The structure of the logic frame model suggests that everything will go according to

    plan programme activities, outcomes and goals are all laid out in advance, as are

    indicators with which to monitor these. As such, there is no provision for a change

in project direction nor a space for learning to be fed into project implementation. Although the logic frame model can be altered during the course of a project, many commentators note that they are rarely revisited (Earle, 2003, p2).

The most common limitations are that a logic frame model represents intention, not reality, and that it focuses on expected outcomes, so people may overlook unintended outcomes (positive and negative).

    Comment

    Evaluators have played a prominent role in using and developing the logic frame

model. This may be why it is often called an evaluation framework. Development and use of logic model concepts by evaluators continues to result in a broad array

    of theoretical and practical applications, say Taylor-Powell and Henert (2008).

    The self-review framework for ICT

This framework has been designed specifically for use by schools to assess the

    e-maturity of the school as an institution. The framework divides into eight

    elements which will support and challenge a school to consider how effectively

    it is using ICT. Staff from schools are able to sign up to use the framework on the

    Becta website: https://selfreview.becta.org.uk/about_this_framework

On registering to use the framework, the site offers clear guidelines for using it in your own context. There are also case studies and video clips available to support and challenge your school/organisation.


    1. Leadership and management

    Develop and communicate a shared vision for ICT.

    Plan a sustainable ICT strategy.

    2. Curriculum

    Plan and lead a broad and balanced ICT curriculum.

    Review and update the curriculum in the light of developments in technology

    and practice.

Ensure pupils' ICT experiences are progressive, coherent, balanced and consistent.

    3. Learning and teaching

    Plan the use of ICT to enhance learning and teaching.

Meet pupils' expectations for the use of ICT.

    Encourage teachers to work collaboratively in identifying and evaluating the

    impact of ICT on learning and teaching.

    4. Assessment

Assess the capability of ICT to support pupils' learning.

    Use assessment evidence and data in planning learning and teaching across the

    whole curriculum.

Assess the learning in specific subjects when ICT has been used.

    5. Professional development

    Identify and address the ICT training needs of your school and individual staff.

Provide quality support and training activities for all staff in the use of ICT, sharing effective practice.

    Review, monitor and evaluate professional development as an integral part of the

    development of your school.

    6. Extending opportunities for learning

    Understand the needs of your pupils and community in their extended use of ICT.

    Ensure provision is enhanced through informed planning, resulting in quality of

    use of ICT within and beyond the school.

    Review, monitor and evaluate opportunities to extend learning within and beyond

    your school.

    7. Resources

    Ensure learning and teaching environments use ICT effectively and in line with

    strategic needs.

    Purchase, deploy and review appropriate ICT resources that reect your school

    improvement strategy.

Manage technical support effectively for the benefit of pupils and staff.

    8. Impact on pupil outcomes

    Demonstrate how pupils can make good progress in ICT capability.

Be aware of how the use of ICT can have a wider positive impact on pupils' progress.

    Review pupil attitudes and behaviour and how the use of ICT can impact

    positively on pupil achievement.


    Arguments for the model

    The Becta site offers a number of positive comments from users about the use of

    the model.

    Arguments against the model

    There are no negative comments about the model on the Becta site.

    Comment

The Next Generation Learning Charter is a four-level scheme to encourage schools' engagement with, and progress through, the self-review framework. On registering with the framework, a school is asked to sign the charter, saying it will undertake a review of the use of ICT in the school during the next three years. When a school has reached a benchmark level in three of the eight elements, it can receive a recognition-level certificate. The ICT mark accreditation is reached after an assessor's visit confirms that the school has reached the nationally agreed standard in all eight elements of the framework. The criteria for judging the ICT excellence awards are based on the highest levels in the framework, and form the top level of the charter.

    https://selfreview.becta.org.uk/about_next_generation_learning_charter

    The test-bed e-maturity model

This e-maturity model (eMM) has been identified and used successfully in other project evaluations. The details of the e-maturity models developed by a team

    from Manchester Metropolitan and Nottingham Trent Universities in their ICT

    test-bed project can be viewed at:

    www.evaluation.icttestbed.org.uk/methodology/maturity_model

    The evaluation assessed the effectiveness of the implementation of ICT in

educational organisations in relation to five key themes. The evaluation comprises

    a range of methodologies, including a survey, maturity model, action research,

    qualitative investigation, and benchmarking performance data. The development of

    the maturity models was funded by Becta/DfES and copyright of the models remains

    with Jean Underwood and Gayle Dillon (authors). Permission to reproduce the

    models, or any part of them, must be sought from the authors directly.

Section four: Evaluation tools

    These are the elements of evaluation that provide the data to evaluate the indicators,

    processes and outcomes of ICT-based projects. Such evaluation tools sit within the

    broad structure of an evaluation model and provide the detailed data from which

    conclusions may be drawn.

    In the development of evaluation methodology it is important to ensure that

    we develop research designs that capture what is important rather than what is

measurable (Coburn, 2003, p9). For this we have to consider a number of factors and, in her research, Coburn has identified four aspects of scale that she considers are vital to the success of projects designed to bring about reform in practices. Scale is usually considered as the increasing take-up of a particular reform; in her research on teaching and learning reform in schools, she suggests that evaluators should redefine scale in four dimensions, as current views are too limiting and take-up alone does not indicate change. The four dimensions of scale are:

depth - relates to the impact and recognition that the reform has on the individual, ie, whether they have changed their behaviour and understand and use the new pedagogy of the reform

sustainability - is the capacity of the organisation increased to enable all staff to maintain these changes?

spread - describes the reform in terms of the understanding and acceptance of its principles and norms, not just in schools but in local authorities and collaborative groups, and

shift in reform ownership - the reform is no longer an external reform controlled by a reformer but becomes an internal one, with authority held by the school and the teachers within it, who have the capacity to sustain, spread and deepen the reform principles themselves.

The identification and measurement of these dimensions requires a range of complex tools, some of which are available and some of which have to be developed in order to gather the data that will inform the evaluation of each dimension. Some of these issues were identified and measured in the CeDARE evaluation methodology. The summary of the following evaluation methods is from the CeDARE (2009a) ICT in ITT survey analysis; they formed some of the tools required to categorise data and to define and measure objectives in the survey.

The five phases of ICT adoption

The research of Hooper and Rieber (1995) has helped to support the recognition of indicators of the capacity of staff to spread and own changes in the use of ICT. They proposed a model of technology in the classroom that was set out in what they defined as the five phases of adoption of ICT by staff. These are the phases:

1. Familiarisation
A teacher's initial experience with ICT. A teacher participates in an ICT training programme but does not then go on to use the information.

    2. Utilisation

    A teacher tries out the ICT in their classroom but does not expand on its use.

    If the technology was taken away on Monday, hardly anyone would notice

    on Tuesday.


3. Integration
A teacher consciously decides to give ICT particular tasks and responsibilities in the classroom. If the technology were taken away, the planned teaching could not proceed as intended.

4. Reorientation
A teacher reconsiders the purpose and function of the classroom, with the focus shifting from the teacher's delivery to the students' learning, supported by the technology.

5. Evolution
A teacher's practice continues to evolve and adapt as new technologies and new understandings of teaching and learning emerge.
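The ICT in ITT project used questionnaire items based on a modified Hooper and Rieber scale to locate staff on this continuum (see section three). A minimal sketch of that kind of mapping follows; the one-item, 1-5 self-rating scheme is an assumption for illustration, not the project's actual instrument.

```python
PHASES = ["familiarisation", "utilisation", "integration", "reorientation", "evolution"]

def adoption_phase(self_rating: int) -> str:
    """Map a 1-5 questionnaire self-rating onto Hooper and Rieber's phases."""
    if not 1 <= self_rating <= 5:
        raise ValueError("rating must be between 1 and 5")
    return PHASES[self_rating - 1]

# Tally a cohort of responses to profile where staff sit on the continuum.
responses = [1, 2, 2, 3, 2, 4]
profile = {phase: 0 for phase in PHASES}
for rating in responses:
    profile[adoption_phase(rating)] += 1
print(profile)
```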

E-maturity models

Work on e-maturity in the further education (FE) and skills sector has resulted in the development of the Generator, a technology improvement leadership tool for further education and skills: http://feandskills.becta.org.uk/display.cfm?page=1897

A common approach for all providers, including colleges, work-based learning organisations, and adult and community education centres, is now out for consultation.

    The initial model is built around four levels of maturity:

    beginning

    developing

    performing, and

    outstanding.

There is also a self-development package on the use and development of the eMM on a Victoria University of Wellington website: www.utdc.vuw.ac.nz/research/emm/

The underlying idea that guides the development of the eMM is that the ability of an institution to be effective in any particular area of work depends on its capability to engage in high-quality processes that are reproducible and able to be extended and sustained as demand grows.

    This site provides a step-by-step guide to develop your evaluation questions

    on capability.

    A number of pilots have shown that an e-maturity model has the potential to

    identify the development and capacity of reform in an organisation. Chapman (2006)

    carried out a pre-event questionnaire using the levels in the e-maturity FE and skills

    developmental model, followed up with a post-event questionnaire some 18 months

later. The effects of the training and its influence on change in pedagogy could be

    clearly recognised from this evaluation.
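Chapman's pre- and post-event design reduces to comparing each respondent's maturity level at two points in time. The sketch below is a rough illustration using the four FE and skills levels listed above; the paired data and the mean-shift measure are assumptions, not Chapman's analysis.

```python
MATURITY = ["beginning", "developing", "performing", "outstanding"]

def shift(pre: str, post: str) -> int:
    """Number of maturity levels moved between the two questionnaires."""
    return MATURITY.index(post) - MATURITY.index(pre)

# Paired pre/post responses for each participant (illustrative data).
paired = [("beginning", "developing"),
          ("developing", "performing"),
          ("developing", "developing")]
moves = [shift(pre, post) for pre, post in paired]
print(f"mean shift: {sum(moves) / len(moves):+.2f} levels")
```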

Technology - a vehicle for enquiry-based learning

Fisher et al (2006) have endeavoured to determine how teachers might learn with digital technologies, using the work of Shulman and Shulman, who propose that ICT affords learners the opportunity to engage with activities. Trainee teachers and their learners may discover that technology provides a suitable vehicle for enquiry-based learning in which the teachers have changed learning practice and collaborate in the learning process. This might be recognised in an evaluation of the use of technology by noting whether teachers are ready, willing and able to teach as a result of this affordance of learning, using what Loveless (2006) calls clusters of purposeful activity. These are separated into vision for education; motivation to learn and develop practice; professional knowledge, understanding and practice; and reflection and learning in community, as a basis for questions to individuals or focus groups. There is an example of its use in a questionnaire in the CeDARE (2009a) ICT in ITT survey: see questions 13 and 14.

Section five: Information on evaluation

    This section outlines sources of further information on tools and ideas for evaluating

    ICT from the UK and elsewhere.

Some ICT-specific models of evaluation practice, including data collection methods and approaches to assessment, may be found in handbooks from a number of sources. The methodologies are too detailed and comprehensive to review in this document, so this section provides a list of websites and titles that may be consulted for further comprehensive information. As with other areas of ICT, the models are subject to change and development. Major sources of advice and information on evaluation methodology for ICT will be found on the Becta (www.becta.org.uk) and JISC (www.jisc.ac.uk) websites.

    Examples of resources available to support andguide evaluation of ICT

    n Educators guide to evaluating the use of technology in schoolsand classrooms

    Link:www.gao.gov/policy/10_1_4.htm

    Sponsor: American Institutes for Research for US Dept of Education

    Scope: Evaluating technology use in elementary and secondary schools

    Audience: Anyone conducting technology evaluation in schools

    Format: Available both as web pages and Adobe pdf document.

• The learning technology dissemination initiative (LTDI)
Link: www.icbl.hw.ac.uk/ltdi/evalstudies/es_all.pdf
Scope: A range of case studies and ideas on evaluation in one downloadable text
Overview: The LTDI has put together a collection of papers, LTDI: evaluation studies, that offers a number of case-study examples. The paper from Professor Barry Jackson, Middlesex University, Evaluation of learning technology implementation, is particularly relevant.

• A guide to logic model development
Link: www.ojp.usdoj.gov/BJA/evaluation/guide/documents/cdc-logic-model-development.pdf
Scope: Sundra DL, Scherer J and Anderson LA (2003) present A guide to logic model development for CDC's Prevention Research Center.
Overview: This US website has a very helpful guide to the production of a logic model framework and many case-study examples.

• A practical guide to evaluation
Link: www.rcuk.ac.uk/aboutrcuk/publications/corporate/evaluationguide.htm
Scope: This is, as it states, a practical guide for anyone drawing up an evaluation of a technology project.
Overview: This guide is designed for those who lead projects intended to engage general audiences in science, social science, engineering and technology and the social, ethical and political issues that new research in these areas raises. It is intended to help project managers evaluate individual projects, regardless of their experience of evaluation.



• A practical guide to evaluation methods for lecturers
Link: www.icbl.hw.ac.uk/ltdi/ltdi-pub.htm#Cookbook
Scope: This offers step-by-step guides to a range of approaches to evaluation.

    Overview: It includes guides to the time, resources and process involved in different

    evaluation methods, with hints relating to the stages of the process and links to

    related pages.

    Information pages aim to provide some basic practical suggestions and advice,

    applicable to a range of different evaluation methods.

Preparation pages: sections have been included to provide a framework for the planning and preparation process involved prior to carrying out an evaluation.

    These aim to encourage you to think in more detail about who the evaluation is

    for, what you are going to be evaluating, and how best you might carry out such

    an evaluation study.

Testing, refining and presentation pages: these encourage you to think of your evaluation study as an ongoing process used to make improvements in teaching and learning. Guidance is provided to encourage you to reflect on ways in which you can act on your results and/or write up your findings in an evaluation report.

• The JISC handbook on evaluation, commissioned by JISC from Glenaffric Ltd (2007): Six steps to effective evaluation: a handbook for programme and project managers
Link: www.jisc.ac.uk/media/documents/programmes/digitisation/SixStepsHandbook.pdf
Scope: This offers a logic model framework approach for evaluating technology projects (see diagram 4). Glenaffric Ltd (2007, p1) states that this handbook may be useful for anyone engaged in development activities in the innovative use of ICT to support education and research.


Overview: The program logic model is defined as a picture of how your organisation does its work - the theory and assumptions underlying the programme. A program logic model links outcomes (both short- and long-term) with programme activities/processes and the theoretical assumptions/principles of the programme. The W K Kellogg Foundation logic model development guide, a companion publication to the evaluation handbook, focuses on the development and use of the program logic model. We have found the logic model and its processes facilitate thinking, planning, and communications about programme objectives and actual accomplishments.

The Kellogg Foundation also has a useful evaluation toolkit, found at www.wkkf.org/Default.aspx?tabid=90&CID=281&ItemID=2810002&NID=2820002&LanguageID=0

• The National Science Foundation (2002) The 2002 user-friendly handbook for project evaluation
Link: www.nsf.gov/pubs/2002/nsf02057/nsf02057_1.pdf
Scope: A clear guide to setting out an evaluation framework for a project. Although based on science projects, there are approaches that are applicable to the use of technology.

Overview: The handbook discusses quantitative and qualitative evaluation methods, suggesting ways in which they can be used as complements in an evaluation strategy. As a result of reading this handbook, it is expected that program managers will increase their understanding of the evaluation process and NSF's requirements for evaluation, as well as gain knowledge that will help them to communicate with evaluators and manage the actual evaluation.

• Online evaluation resource library (OERL)
Link: http://oerl.sri.com/
Scope: A collection of resources for people seeking information on evaluation.
Overview: OERL's mission is to support the continuous improvement of project evaluations. Sound evaluations are critical to determining project effectiveness. To this end, OERL provides a large collection of sound plans, reports and instruments from past and current project evaluations in several content areas, and guidelines for how to improve evaluation practice using the website resources.


OERL's resources include instruments, plans and reports from evaluations that have

    proved to be sound and representative of current evaluation practices. OERL also

    includes professional development modules that can be used to better understand and

    utilise the materials made available.

• The route map
Link: http://publications.dcsf.gov.uk/default.aspx?PageFunction=productdetails&PageMode=publications&ProductId=RR659&
Scope: These materials are intended for use by CPD leaders/coordinators, participants and providers, departments, teams, schools and LEAs.

    Overview: They are an edited version of materials produced as part of a

    two-year, DfES-funded research project undertaken by the Universities of

    Warwick and Nottingham.

    Appendix 8 of the report Evaluating the impact of continuing professional

    development in schools sets out a model for evaluating the impact of CPD

    in schools. It offers a series of steps to follow and questions to ask.

Section six: An example of the use of an evaluation framework

This section gives a worked example of an evaluation model, the logic model framework, applied to an ICT in ITT project.

    The example is based on a small-scale technology development in an

    employment-based initial teacher training (EBITT) programme making up

    the Dorset Teacher Education Partnership (DTEP).

Title of the evaluation: Evaluation of the use of a virtual learning environment (VLE): improving reflective practice and self-assessment of progress against the QTS standards and supporting practice.

The information for the evaluation is drawn from a short video of individuals talking about the use of the VLE in their practice (CeDARE, 2009b). See the video case study at www.tda.gov.uk/techforteaching

    Evaluation factors

Any evaluation should consider the five principles of evaluation:

    identify the focus and purpose of evaluation

    build on what is already known

    gather evidence

    analyse and interpret

    communicate and feed back.

    (See section 2 for more detail)

The use of the logic frame model ensures that these principles are adhered to, as it encourages participants to think clearly about:

a. input: what is invested in the project
b. outputs: what is done as part of the project
c. outcomes (impact): what results are achieved in the project.

The logic framework model offers both a vehicle for planning and a framework for evaluation. A logic model helps us match evaluation to the actual program so that we measure what is appropriate and relevant (Taylor-Powell and Henert, 2008, p1).

In its simplest form the logic frame is made up of three elements which logically link activities and effects.

Diagram 1. A simple logic framework model

Inputs (programme investments) → Outputs (activities, participation) → Outcomes: impact (short, medium and long term)


To use this model, the first step is to complete a flow model from need to final impact. When this is completed it will:

    provide a plan for future evaluation

    identify the outcomes that should be measured, and

    provide a guide as to the evaluation tools to be used.

    To draw up a simple logic frame model of the DTEP project you will need to

    review the video and refer to the model below and the guidelines available at

    www.uwex.edu/ces/lmcourse/

Ideally, the logic model should be drawn up at the beginning of a project and should involve all stakeholders. This will identify the focus and purpose of the evaluation from its outset.

Try to complete the model step by step using a blank flow chart (a more detailed teach-yourself guide may be found at www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html)

You need to:

identify why the project was set up: situation and priorities

note what resources are anticipated for the project: input

identify the activities to be carried out by the project and who will participate in them: outputs, and

state the proposed results from the project at short-, medium- and long-term time-scales: outcomes (impacts). A minimal code sketch of such a model follows.
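As a rough illustration of the step-by-step exercise above, the entries below populate a simple logic model for the DTEP project in dictionary form (echoing the structure sketched in section three). The entries are paraphrased from the video case-study findings; they are a sketch, not the partnership's actual planning document.

```python
# A simple logic model for the DTEP VLE project (entries are illustrative).
dtep = {
    "situation": ["paper-based QTS evidence was unwieldy"],
    "inputs": ["VLE platform", "tutor time", "initial training session"],
    "outputs: activities": ["train trainees and staff", "store QTS evidence online"],
    "outputs: participation": ["EBITT trainees", "mentors", "programme managers"],
    "outcomes: short term": ["trainees store and organise evidence electronically"],
    "outcomes: medium term": ["improved reflective practice and self-assessment"],
    "outcomes: long term": ["VLE embedded across the partnership's provision"],
    "external factors": ["whether Ofsted accepts VLE-based assessment"],
}
for element, entries in dtep.items():
    print(f"{element}: {'; '.join(entries)}")
```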

We have detailed below a completed logic frame model for the DTEP project as an example. We have also provided the evaluation outcomes from the project to illustrate how a logic model would have helped with both planning and evaluation.


Findings

The findings of the evaluation from the video were as follows.

Scope of the implementation

Planning of the project was limited in its scope. There were a number of assumptions made based on little evidence.

The training provided for the trainees and other staff was very limited and unsupported post-training.

The VLE has meant that the wheelbarrow of paper is no longer needed.

Trainees found the VLE very useful, not only for storing their evidence but also for developing other areas of their work.

It is still uncertain whether Ofsted will accept assessment of trainees via a VLE.

Depth of engagement

For some trainees the use of the VLE changed the way they worked and developed their reflective practice.

New communities of practice were established by trainees.

Few mentors changed their practice.

During the project a number of other issues were identified that now need to be developed.

The future potential of the VLE has been recognised by trainees and managers involved in the project.

Transfer of ownership

Some trainees were developing their own communities of practice with other users of the VLE.

Some trainees were changing their practice as a result of the availability of resources within the VLE.

The partnership has recognised the potential of a VLE to develop, change and improve practice for more than just trainees.

    Recommendations

The project met its aims but also highlighted the limited outlook of those original aims. The project leaders now need to:

    involve more staff in the use of the VLE

    develop future training events to meet the needs of other groups of staff in using

    the VLE

    involve Ofsted in the discussions of their future developments, and

    monitor the impact of the use of the VLE on teaching and learning for trainees

    and other staff.

    Comment

It is anticipated that those who have watched the video and followed the steps in the model would arrive at similar recommendations. The logic framework model should enable a straightforward evaluation of any project. This example has used a limited range of activities, but the model has the potential to be used in either a simple or a multifaceted project.

References

    Bennett, J, 2003. Evaluation Methods in Research, London: Continuum

Bickman, L, 1987. The Functions of Program Theory. In Bickman, L (Ed) Using Program Theory in Evaluation: New Directions for Program Evaluation, 33, San Francisco, CA: Jossey-Bass

CeDARE, 2009a. ICT in ITT Survey, Final Report, Wolverhampton: University of Wolverhampton

CeDARE, 2009b. Teacher Trainees: Virtual Learning Environments (VLEs) in Learning and Teaching, DTEP Case Study, Wolverhampton: University of Wolverhampton

Chapman, R W C, 2006. From Coordinated to Innovative: Investigating Change Management in the Use of Electronic Learning Technologies within a Large Further Education College. Unpublished MA dissertation, University of Wolverhampton

Chelimsky, E, 1996. Thoughts for a New Evaluation Society. Keynote speech at UK Evaluation Society Conference, London, 19-20 September

Davis, N, Preston, C, and Sahin, I, 2009. ICT Teacher Training: Evidence for Multi-Level Evaluation from a National Initiative, British Journal of Educational Technology, vol 40, no 1, pp135-148

Earle, L, 2003. Lost in the Matrix: The Logframe and the Local Picture. Paper for INTRAC's 5th Evaluation Conference: Measurement, Management and Accountability?, 31 March - 4 April, The Netherlands

Easterby-Smith, M, 1986. Evaluation of Management Education, Training and Development, Hants, UK: Gower

Fisher, T, Higgins, C, and Loveless, A, 2006. Teachers Learning with Digital Technologies: A Review of Research and Projects, Report 14, Futurelab Series

Glenaffric Ltd, 2007. Six Steps to Effective Evaluation: A Handbook for Programme and Project Managers, http://www.jisc.ac.uk/media/documents/programmes/digitisation/SixStepsHandbook.pdf, accessed 3 May 2009

Goodall, J, Day, C, Harris, A, and Lindsay, G, 2005. Evaluating the Impact of Continuing Professional Development, DfES RB659, London: DfES

Guskey, T R, 1998. Evaluation Must Become an Integral Part of Staff Development, Journal of Staff Development, vol 19, no 4

Guskey, T R, 2000. Evaluating Professional Development, Thousand Oaks, CA: Corwin Press

Guskey, T R, 2001. JSD Forum: The Backward Approach, Journal of Staff Development, vol 22, no 3, p60

Guskey, T R, 2002. The Age of Our Accountability, Course Outline, University of Kentucky

Hooper, S, and Rieber, L P, 1995. Teaching with Technology. In Ornstein, A C (Ed) Teaching: Theory into Practice, pp154-170, Needham Heights, MA: Allyn and Bacon

Kirkpatrick, D L, 1959. Techniques for Evaluating Programmes, Journal of the American Society of Training Directors, vol 13, no 11, pp3-9

Nevo, D, 2006. Evaluation in Education. In Shaw, I F, Greene, J C, and Mark, M M (Eds) The Sage Handbook of Evaluation, pp451-460, London: Sage Publications Ltd

Patton, M, 1997. Utilisation-Focused Evaluation, 3rd edition, Thousand Oaks, CA: Sage Publications


  • 8/14/2019 The SO WHAT Factor Impact Evaluation Strategies for Teacher Educators

    29/30

RCUK, 2008. An Introduction to Evaluation, http://www.rcuk.ac.uk/aboutrcuk/publications/corporate/evaluationguide.htm, accessed 11 March 2009

Scriven, M, 1967. The Methodology of Evaluation. In Stake, R E (Ed) AERA Monograph Series on Curriculum Evaluation, No 1, Chicago: Rand McNally

Stern, E, 1990. The Evaluation of Policy and the Politics of Evaluation. In The Tavistock Institute of Human Relations Annual Review

Shulman, L S, and Shulman, J H, 2004. How and What Teachers Learn: A Shifting Perspective, Journal of Curriculum Studies, vol 36, no 2, pp257-271

Sundra, D L, Scherer, J, and Anderson, L A, 2003. A Guide to Logic Model Development for CDC's Prevention Research Center, Centers for Disease Control and Prevention, www.ojp.usdoj.gov/BJA/evaluation/guide/documents/cdc-logic-model-development.pdf, accessed 12 March 2009

Tamkin, P, Yarnall, J, and Kerrin, M, 2002. Kirkpatrick and Beyond: A Review of Models of Training Evaluation, Report 392, London: Institute for Employment Studies

Taylor-Powell, E, and Henert, E, 2008. Developing a Logic Model: Teaching and Training Guide, Madison, WI: University of Wisconsin-Extension, Cooperative Extension, Program Development and Evaluation, www.uwex.edu/ces/pdande, accessed 12 March 2009

UK Evaluation Society, www.evaluation.org.uk/resources/glossary, accessed 5 March 2009

University of Wisconsin-Extension, 2008. Logic Model, http://www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html, accessed 12 March 2009

    Wholey, J, 1983. Evaluation and Effective Public Management. Boston: Little, Brown

Wholey, J, 1987. Evaluability Assessment: Developing Program Theory. In Bickman, L (Ed) Using Program Theory in Evaluation: New Directions for Program Evaluation, 33, pp5-18, San Francisco, CA: Jossey-Bass Publishers


    The TDA is committed to providing accessible information.

    To request this item in another language or format, contact

    TDA corporate communications at the address below or

    e-mail: [email protected]

Please tell us what you require and we will consider with you how to meet your needs.

    Training and Development Agency for Schools

City Tower, Piccadilly Plaza, Manchester M1 4TD
TDA switchboard: t 0870 4960 123
Publications: t 0845 6060 323 e [email protected]

    www.tda.gov.uk

© TDA 2009