
DOCUMENT RESUME

ED 417 346 CE 076 101

AUTHOR Green, Muriel; Bartram, Dave
TITLE Initial Assessment to Identify Learning Needs.
INSTITUTION Further Education Development Agency, London (England).
ISSN ISSN-1361-9977
PUB DATE 1998-00-00
NOTE 51p.

AVAILABLE FROM Further Education Development Agency, Publications Dept., Coombe Lodge, Blagdon, Bristol BS40 7RG, United Kingdom (7.50 pounds).

PUB TYPE Collected Works Serials (022); Guides Non-Classroom (055)

JOURNAL CIT FE Matters; v2 n7 1998
EDRS PRICE MF01/PC03 Plus Postage.
DESCRIPTORS Adult Programs; *Adult Students; Diagnostic Tests; *Educational Needs; *Educational Practices; *Evaluation Methods; Foreign Countries; National Standards; *Needs Assessment; Objective Tests; Postsecondary Education; *Student Evaluation; Technical Institutes; Test Construction; Vocational Education

IDENTIFIERS *United Kingdom

ABSTRACT
This document, which is intended for curriculum managers, student services managers, and learning support managers in further education (FE) colleges throughout the United Kingdom, presents the practical experiences of FE colleges that have implemented a range of approaches to initial assessment and offers technical information, good practice criteria, guidance, and advice regarding developing and using initial assessment materials. The following are among the topics discussed: state of initial assessment in FE colleges; approaches to initial assessment (understanding the rationale for initial assessment, managing initial assessment in FE colleges, choosing and/or devising assessment materials, timing initial assessment, grading and feedback); technical aspects of initial assessment (assessment and testing, assessment policy, definition of initial assessment, assessment for placement versus classification, selection interviews, screening and diagnosis, rationale for diagnostic testing, and use of screening tests and diagnostic assessments); tools and techniques (quality criteria, required resources, key points); advice to colleges (developing an assessment policy; making contracts; understanding options; and deciding whether to test, which tests to use, and whether to use internally developed tests or tests from other colleges). Appended are the following: a draft code of good practice testing; guidelines for defining objective tests; and information about designing and developing diagnostic tests. (MN)

********************************************************************************

Reproductions supplied by EDRS are the best that can be made from the original document.

********************************************************************************


Further Education Development Agency

Initial assessment to identify learning needs
Muriel Green and Dave Bartram


Published by the Further Education Development Agency (FEDA), Dumbarton House, 68 Oxford Street, London W1N 0DA
Tel: [0171] 436 0020 Fax: [0171] 436 0349

Feedback and orders should be directed to:
Publications Department, FEDA,
Coombe Lodge, Blagdon, Bristol BS40 7RG
Tel: [01761] 462 503 Fax: [01761] 463 140

Registered with the Charity Commissioners

Editor: Jennifer Rhys

Designer: Mike Pope

Printed by: Blackmore Limited, Shaftesbury

Cover photograph: Reproduced with kind permission from GLOSCAT

ISSN: 1361-9977

© 1998 FEDA

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, electrical, chemical, optical, photocopying, recording or otherwise, without prior permission of the copyright owner, except as follows:

FEDA grants the purchaser a non-transferable licence to use any case studies and evaluation frameworks as follows: (i) they may be photocopied and used as many times as required, within a single site in the purchasing institution solely; (ii) short excerpts from them may be incorporated into any development paper or review document if the source is acknowledged and the document is not offered for sale; (iii) permission for other uses should be sought from FEDA, Blagdon.


ACKNOWLEDGEMENTS

FEDA thanks the following colleges that contributed to our work on initial assessment practice.

Arnold and Carlton College
Basford Hall College
Basingstoke College of Technology
Bilborough Sixth Form College
Calderdale College
Carmarthenshire College
Clarendon College
Gloucestershire College of Arts and Technology
Hartlepool College
High Pavement Sixth Form College
Hull College
Mid Kent College of Higher and Further Education
Milton Keynes College
North Oxfordshire College and School of Art
Rotherham College of Arts and Technology
Sandwell College of Further and Higher Education
South Nottingham College
Southwark College
The People's College
The Sheffield College
West Suffolk College

ABOUT THE AUTHORS

Muriel Green has had a range of roles in all sectors of education. She joined the former FEU from an adviser/inspector role in Leicestershire, where she had been a member of the LEA's assessment team. Her work has focused particularly on learner and learning support and she is currently managing a QCA-funded project on initial assessment, again working with Dave Bartram. Her recent publications include Additional support, retention and guidance in urban colleges (1997) and Different approaches to learning support (1998).

Dave Bartram is Professor of Psychology and Dean of the Faculty of Science and the Environment at the University of Hull. He is a Chartered Occupational Psychologist, a Fellow of the British Psychological Society, and a Fellow of the Ergonomics Society. He is the author of over 200 journal articles, technical reports, books and book chapters, most of which concern assessment and assessment issues, particularly in the context of personnel selection. He has also been responsible for the design and implementation of a range of assessment software products and paper-and-pencil tests which are used in a number of countries.


Contents

Foreword 4

Summary 5

1 Introduction 7
Background
Initial assessment in FE colleges

2 Approaches to initial assessment 9
Why offer initial assessment?
Managing initial assessment in FE colleges
Choosing and/or devising assessment materials
Timing initial assessment
Administering initial assessment
Marking and feedback

3 Technical aspects of initial assessment 25
Assessment and testing
Assessment policy
Initial assessment: clarification and definitions
Selection: placement or classification
The selection interview
Screening and diagnosis
Why do we need diagnostic testing?
Using screening tests and diagnostic assessments

4 Tools and techniques 32
Quality criteria
What resources are required?
Key points

5 Advice to colleges: dos and don'ts 38
Assessment policy
Making a contract
What are the options?
To test or not to test
Which tests to use
Do-it-yourself?

6 Conclusions 39

Appendices
1 Draft code of good practice testing 40
2 Defining objective tests 41
3 How are diagnostic tests designed and developed? 43

References 46


Foreword

This book focuses on initial assessment practice in further education colleges and has been written for curriculum managers, student services managers and learning support managers. It may also be of use to schools, careers services, training providers and higher education.

Using the outcomes of two related FEDA initial assessment projects, it presents colleges' practical approaches to initial assessment and offers technical information, good practice criteria, guidance and advice based on an evaluation of college materials.

I am grateful to all members of the earlier project's team, Sally Faraday, Pho Kypri and Anna Reisenberger, who worked with me to support project colleges in this important area of work. The commitment, energy and enthusiasm of the colleges themselves must be acknowledged, as must the professionalism of those college representatives who have already done so much to disseminate their experience via FEDA conferences.

I am particularly grateful to the colleges which were brave enough, in our most recent project, to submit examples of their 'home-grown' materials to be scrutinised and evaluated by our expert consultant, Dave Bartram. Dave Bartram is a Chartered Occupational Psychologist and Professor of Psychology at Hull University, with an international reputation for his work on assessment. I have very much appreciated his expertise and support in recent and current development work. I hope that readers find his contribution to this report, as co-author, as informative and useful as we have.

Muriel Green
FEDA education staff


Summary

Background

This book focuses on initial assessment practice in further education (FE) colleges. It has been written for curriculum managers, student services managers and learning support managers. It may also be of use to schools, careers services, training providers and higher education.

Using the outcomes of two related FEDA initial assessment projects (Initial Assessment and Learning Support), it presents the practical experiences of colleges which implemented a range of approaches to initial assessment and offers technical information, good practice criteria, guidance and advice based on an evaluation of the initial assessment materials developed in selected colleges.

Best practice

Best practice in the management and implementation of initial assessment includes:

a common and shared understanding of the purpose of initial assessment
a clear management strategy which reflects the college mission and purpose
an assessment policy and code of practice
a transparent management structure which identifies roles and responsibilities, clear lines of communication and accountability.

In reviewing the management and implementation of initial assessment, colleges will need to reflect on the purposes of initial assessment and on the extent to which existing policies, structures, roles and responsibilities will help the college achieve in relation to each of the identified purposes of initial assessment.

Purpose and timing

The purpose and timing of assessment will need to be considered so that:

assessment which seeks to aid placement takes place pre-entry
assessment which seeks to identify skill levels and support takes place at entry
assessment which seeks to identify the specific nature of support needs is an integral part of the induction process for those students who have been identified by screening as needing support.

Administering initial assessment

Colleges need to:

develop clear implementation strategies which relate to the purpose of initial assessment, in line with the college's assessment policy
clarify and confirm the roles and responsibilities of all staff involved in the process
establish clear and simple systems for managing the process of initial assessment
offer training, guidance and support to staff to ensure they feel comfortable and confident in their roles
communicate clearly to students the purposes and processes of initial assessment.

College materials

FEDA has collected examples of home-grown diagnostic assessment tools developed by cutting-edge colleges, including a few which sell their assessment materials to other colleges. Problems with college-developed materials included:

setting cut-off scores for classifying people into different levels of attainment without any empirical justification for the location of the cut-off points (see the sketch after this list)
inappropriate diagnostic interpretation of responses to individual items or small subsets of items on tests designed as screening tests
lack of evidence of reliability or validity or other supporting technical documentation
application of inappropriate 'summative' educational assessment approaches to diagnostic assessment.
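The first of these problems lends itself to a small worked illustration. The sketch below is not drawn from any project college's materials: it is a hypothetical Python fragment, using invented scores, showing one simple way a cut-off could be given an empirical justification, by trying candidate cut-offs against a validation sample of students whose support needs were later confirmed and keeping the one that best balances the proportion of genuine needs caught against the proportion of other students passed through.

# Hypothetical sketch: choosing a screening cut-off empirically rather than by
# judgement alone. The data are invented; in practice they would come from a
# validation sample of earlier students whose support needs were later
# confirmed by fuller diagnostic assessment.

# (score on the screening test, support need confirmed later)
validation_sample = [
    (12, True), (15, True), (18, True), (21, True), (23, False),
    (24, True), (27, False), (30, False), (33, False), (36, False),
    (38, False), (41, False), (44, False), (47, False), (50, False),
]

def hit_rates(cut_off, sample):
    """Proportion of genuine support needs caught (sensitivity) and of
    non-needs correctly passed (specificity) when scores below cut_off are flagged."""
    flagged_needy = sum(1 for s, needy in sample if needy and s < cut_off)
    passed_ok = sum(1 for s, needy in sample if not needy and s >= cut_off)
    total_needy = sum(1 for _, needy in sample if needy)
    total_ok = len(sample) - total_needy
    return flagged_needy / total_needy, passed_ok / total_ok

# Try every candidate cut-off and keep the one with the best balance of the two
# rates (Youden's J = sensitivity + specificity - 1).
candidates = sorted({score for score, _ in validation_sample})
best = max(candidates, key=lambda c: sum(hit_rates(c, validation_sample)) - 1)
sensitivity, specificity = hit_rates(best, validation_sample)
print(f"cut-off {best}: sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")

Even a rough check of this kind gives a defensible basis for the chosen cut-off and shows, before the test is used, roughly how many students it is likely to flag for support.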


The publication

This publication presents good practice criteria and guidance to support colleges in producing more rigorous and robust assessment instruments.

In choosing or devising initial assessment materials, colleges need to:

be clear about the purpose for which tests will be used
identify the skill demands of particular programmes and ensure that initial assessment materials relate to them
ensure that specialist learning support staff can be available to advise, guide or work with curriculum teams
clarify roles and responsibilities in line with college policy and a code of practice.

Quality control in the use of objective assessment depends on the combination of robust, relevant instruments and competent users. Six main areas of quality control criteria are highlighted: scope, reliability, validity, fairness, acceptability and practicality. To judge the overall cost-effectiveness of using any assessment method, one should evaluate it against all six of these criteria.
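Reliability is the criterion most often missing from the supporting documentation of home-grown tests. As an illustration only (a hypothetical Python fragment with invented item scores; the report does not prescribe any particular coefficient), the sketch below estimates internal-consistency reliability with Cronbach's alpha, the kind of technical evidence a college could attach to a screening instrument.

# Hypothetical sketch: estimating the internal-consistency reliability of a
# short screening test with Cronbach's alpha. The scores are invented; each row
# is one student, each column one item marked 0 (wrong) or 1 (right).
item_scores = [
    [1, 1, 1, 0, 1],
    [1, 0, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 0, 1, 1],
]

def variance(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

def cronbach_alpha(rows):
    k = len(rows[0])                                    # number of items
    item_vars = [variance([row[i] for row in rows]) for i in range(k)]
    total_var = variance([sum(row) for row in rows])    # variance of total scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

print(f"Cronbach's alpha = {cronbach_alpha(item_scores):.2f}")

A low value would suggest the items are not measuring a single underlying skill consistently, which matters if the total score is used to make decisions about placement or support.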

Key messages

Accurate diagnosis is not the same as effective 'treatment', but it does provide the information needed to target support resources more efficiently.

Overall efficiency requires accurate diagnosis combined with appropriate placement and good learning support.

Poor diagnosis is costly to develop and can lead to a misdirection and waste of support effort.

Poor diagnosis can end up costing you more than no diagnosis.


1 Introduction

BACKGROUND

FEFC's report, Measuring achievement (1997), provides interesting data on student continuation and achievement rates. The continuation rate for the median college in 1994-5 was 88%, with achievement at 71%. Average college achievement rates varied by college type, from 82% in specialist colleges to 63% in general FE and tertiary colleges. The range of achievement was much wider than that of continuation rates: a quarter of FE colleges had achievement rates of less than 51%.

Colleges are thus keen to address problems of retention and achievement and are giving a high priority to effective learning and teaching. They are attaching great importance to the need to make early and informed judgements about learners' experience and skills in order to guide them towards, and select them for, the most appropriate programme. They are also keen to ensure that learners are able to manage their work, make personal and academic progress and achieve their learning goals.

The report from the FEFC's Learning Difficulties and/or Disabilities Committee led by Professor Tomlinson, Inclusive Learning (1996), promotes an approach to learning 'which we would want to see everywhere':

At the heart of our thinking lies the idea of match or fit between how the learner learns best, what they need and want to learn, and what is required from the sector, a college and teachers for successful learning to take place.

(Tomlinson, 1996, 2.3)

Early and effective assessment of students' requirements is critical to the concept of inclusive learning. Chapter 5 of Inclusive Learning sets out how students' requirements will need to be assessed in order to ensure inclusive learning. While acknowledging the desirability of inclusive learning and the particular assessment requirements of individuals with learning difficulties and disabilities, this book focuses on the whole-college approaches to initial assessment which were going on at the same time as the FEFC's Committee was examining educational provision for those with learning difficulties and/or disabilities. In sharing information on examples of initial assessment practice, from which we have distilled general guidance and good practice criteria, FEDA seeks to contribute to improved initial assessment processes and to the development of more 'inclusive' colleges.

It is important to signal at this early stage that initial assessment on its own will do little to ensure effective learning. It is the way in which learners, teachers and institutions are able to use the information generated that is critical to the learners' success. FEDA's publication, Different approaches to learning support (Green, 1998), disseminates good practice and offers advice on effective support systems. Readers seeking information, advice and guidance to help take forward their own provision will find it useful to work with both publications together.

INITIAL ASSESSMENT IN FE COLLEGES

There seems to be some confusion about initial assessment practices. Initial assessment is the first experience of a sequence of assessment processes which enables an assessor to make judgements about the assessed. Sometimes learners will make judgements about themselves, measuring their performance against shared criteria through a self-assessment process. Assessment in an educational context should support learners and learning. For learners it will:

identify what has been learned
provide feedback
identify what still needs to be learned
enable learners to set targets which ensure success in new learning
allow learners to take responsibility for personal development.

For lecturers, assessment will:

provide confirmation of what has been learned and what still needs to be learned
be a basis for discussion with students and other staff
help with evaluation and planning of programme design and delivery.


Assessment can be both forward-looking and backward-looking. Backward-looking assessment measures what has been learned or achieved. Forward-looking assessment looks at what may be achieved or learned in the future. Most educational assessment is backward-looking: it attempts to measure what has been achieved and to provide some form of credit for it (in the form of whole or part qualifications).

Initial assessment has both forward- and backward-looking components. It is concerned both with assessing where the student is now and with making judgements about his or her capacity to progress along one or other of a number of paths. It is the latter aspect of assessment which creates the greatest difficulty and demands careful consideration of the technical nature of the process.

A later section of this book goes into the technicalities of assessment in some depth. However, we need to signal briefly at this early stage that initial assessment in colleges is a complex set of processes which can:

inform guidance
facilitate selection and placement
screen for levels of literacy and numeracy
diagnose programme-specific learning needs.

Screening and diagnostic assessment are different but related processes. Page 27 of this publication seeks to clarify the distinction between these two forms of initial assessment.

Colleges using initial assessment processes, including screening and diagnosis, do so for different purposes and use different approaches. The next chapter presents an overview of college practice and makes recommendations which are expanded in later sections of the document.


2 Approaches to initial assessment

WHY OFFER INITIAL ASSESSMENT?

National imperatives to recruit and retain more students are seen as the major driving forces behind initial assessment developments. Colleges keen to recruit more, and different, learners recognise that growth inevitably leads to a change in the student profile, a possible increase in the numbers of non-traditional students and a need to work harder to support students so that they are able to make positive progress toward their stated learning goals.

This section includes information gained from FEDA action research with a small group of colleges (17), selected to represent the diversity of FE provision across the country, looking in detail at the different approaches to initial assessment.

While colleges recognise the value of initial assessment for its own sake, the Funding Methodology, with additional support units, is flagged up by project colleges as the most powerful lever for change in initial assessment practice.

The need for evidence to support applications for additional funding units promotes the need for a coherent system. Some project colleges needed to move from using a range of different tests across the college, with no consistency of approach, administration, marking, interpretation or use of information, to a whole-college approach underpinned by a clear strategy and policy, implemented through transparent systems and structures.

In most cases, something which started as small-scale activity with its roots in 'special needs' has become part of the learner's entitlement and is linked closely with a desire to offer appropriate support to a wide range of students in a bid to improve retention and achievement.

Where colleges can offer rigorous and robust initial assessment of learners it will be possible also to offer the individual support to ensure personal and academic progress. Data collected through rigorous initial assessment processes can provide a baseline against which to measure student achievement, to help motivate learning and celebrate success.

In short, there is a demonstrable value in making an early assessment of learners' needs so that difficulties do not grow into problems of non-attendance, missed deadlines, lack of progress/achievement and, eventually, drop-out. However, the outcomes of assessment must be used at different levels. Aggregate data need to inform strategic and curriculum planning and the allocation of resources. Information about individuals needs to be used positively to motivate learning, secure support and, where appropriate, additional funding units.

Purposes of initial assessment

It is important that colleges are clear about the purposes of initial assessment. Initial assessment can:

guide placement
identify individual needs and inform support provision
inform strategic planning
promote curriculum development and change
provide evidence for additional funding units
provide baseline data to motivate and provide a measure for student achievement.

Readers will need to consider the extent to which their own institution aims to do all of the above and the extent to which this is commonly understood by staff at all levels across the organisation.

MANAGING INITIAL ASSESSMENT IN FE COLLEGES

Overall management responsibility for initial assessment is divided equally across project colleges, with 50% managed through client and student services and 50% through a senior curriculum manager. Although strengths and weaknesses are attributed to different approaches, there is no evidence that the particular location of the service within the management structure has affected developments.

Usually the operational management of initial assessment is the responsibility of learning and study support managers and their teams. Colleagues with a range of related roles and responsibilities may be represented in teams which co-ordinate and implement initial assessment.


They include:

learning and study centre manager
learning and key skills workshop managers and staff
English, Maths and IT staff
library staff
adult basic education staff
literacy and language support staff
Section 11 staff
additional needs managers and staff
managers and staff working with learners with difficulties and disabilities.

Learning support staff rarely implement initial assessment processes on a large scale. One favoured approach was supporting others in administering tests and/or developing and presenting initial assessment assignments. There was no consistent approach to marking, interpreting results and feeding back to students or others. Learning support staff, mainstream staff and tutors were all involved in different stages of the process. Outcomes of assessment were also collected, recorded and used in different ways.

One college saw initial assessment as the responsibility of threshold services. The diagnostic assessment officer, part of the threshold services team, serviced programme area teams so that initial assessment was programme-related, with coherence provided by a common policy and framework. Study Link then took up responsibility for support. Both were located in client services.

Overall, there are quite complex management models with clearly identified senior managers but potentially large numbers of staff with a wide range of specialisms and experience. Skills can be used to best effect where there is overt, real senior management support, a clear whole-institution strategy and policy, and when roles and responsibilities are clearly described, understood and supported. A measure of the effectiveness of any management model must be the degree of consistency in the quality of the learners' experience.

Best practice

Best practice in the management and implementation of initial assessment includes:

a common and shared understanding of the purpose(s) of initial assessment
a clear management strategy which reflects the college mission and purpose
an assessment policy and code of practice
a transparent management structure which identifies roles and responsibilities, clear lines of communication and accountability.

In reviewing the management and implementation of initial assessment, colleges will need to reflect on the purposes of initial assessment and the extent to which existing policies, structures, roles and responsibilities will help the college achieve in relation to each of the identified purposes of initial assessment.

CHOOSING AND/OR DEVISING ASSESSMENT MATERIALS

In 50% of the colleges which worked with FEDA, decisions about the kinds of assessment materials to be used and, where appropriate, their development were made by specialist learning support staff and their colleagues in curriculum areas.

One of the most important aspects of this system has been the involvement of tutors. Their understanding of the issues, their knowledge of assessment measures and suitable approaches have been crucial to the success of systems.

In the other 50%, decisions may have been made at senior management level or by specialist teams of guidance and admissions or learning support staff. Some colleges did not consider this good practice.

No time was allocated for liaison between support and mainstream staff. In a large multi-site college this led to difficulties: poor administration, inconsistencies in marking.

Regardless of who was involved in decision-making, all but two of FEDA's project colleges opted to use the Basic Skills Agency screening materials.

Other nationally available materials used to support initial assessment practice included:

ACCUPLACER: an adaptive, computer-aided placement test developed for use in North American Community Colleges; tests maths and language skills at a range of levels (not diagnostic); chosen for general screening purposes

AEB Achievement Test in Numeracy Level 3: put onto computer and chosen for use with adults

Foundation Skills Assessment: a paper-based language and numeracy test which can be computer marked with an optical mark reader; chosen for use with adults

MENO: Thinking Skills Assessments from the University of Cambridge Examinations Syndicate; Literacy, Spatial Operations and Understanding Argument chosen for use with Access students, and Critical Thinking chosen for use with A-level students

NFER-Nelson Basic Skills Numeracy Test: chosen by the learning support team in collaboration with numeracy tutors across the college; deals with calculations, approximations and problems

NFER-Nelson General Ability Tests: chosen to present a profile of verbal, non-verbal, numerical and spatial ability

NFER-Nelson Graduate and Managerial Assessment: a battery of three high-level aptitude tests (verbal, numerical and abstract reasoning) for use with adults aiming for higher education (HE).

In all cases these tests were used with a relatively small proportion of the student population, often adults. In most cases they were introduced because colleagues were seeking to test learners on higher-level programmes using appropriate and acceptable materials. Those who implemented tests needed to be trained to administer, mark and interpret results. These responsibilities did not constitute a significant aspect of staff roles and were not always formally recognised.

Where colleges decide to use a range of nationally available initial assessment materials or tests, it is critical that those with responsibility to administer tests are working in line with a Code of Good Practice similar to the draft in Appendix 1.

All colleges were involved in developing their own materials. Sometimes these were for initial screening, but more often they were to be used as a final stage of initial assessment during the induction programme. An early task in the development of 'home-grown' assessment materials is the involvement of curriculum teams in identifying the skill demands of their particular programme.

One college developed its own key skills framework (skills fundamental to successful learning, not NCVQ's Key Skills); see the example in Figure 1 (on page 13). With advice and guidance from specialist learning support staff, curriculum teams used the framework to analyse the skill demands of programmes before designing their own assessment materials to identify which skill demands could and could not be met by in-coming students.

Curriculum teams preferred this approach as they saw initial assessment as part of a continuum. They needed something which was related to the individual and their chosen programme of study, integrated and on-going within the programme, and, critically, which would be seen in a positive light by students. Curriculum teams were also encouraged to use their analysis of the framework to identify how and where students could be helped to develop skills through:

integration within the subject or unit
discrete, taught sessions
flexible learning
learning/study support.

Another college (see the example in Figure 2 on page 16) used the expertise of its diagnostic assessment officer to draw up checklists, from which programme teams could produce a profile of course demands as a basis for developing programme-specific screening materials.

Again, programme teams took responsibility for the development of materials, drawing on the expertise of specialist staff.

In a third college, an initial assessment and guidance manager had responsibility for co-ordinating all initial assessment, including the development of subject-specific language materials to help the placement of students without the formal qualifications identified in course entry criteria. Staff perceptions were identified as an important issue as, for example, it became clear that 'a lower standard for first language speakers appeared to be used in Art and Design than other courses despite the fact that the language level is high as well as wide ranging'.

The Art and Design test was devised particularly to see if it might give more exact information about course-related language skills. For example, the reading text was based on a design brief and aimed to discover students' abilities to cope with basics: descriptive vocabulary and the language of basic design concepts. The writing task was primarily descriptive because an ability to describe visual experience was identified by Art and Design staff as a basic criterion for the course.

This serves to reinforce the need to be clear about the skill demands of each learning programme and for specialist and programme staff to work together to develop, or support the development of, materials.


Figure 1 An extract from a college-devised skills framework used to profile course demands

Information processing | Level 1 | Level 2 | Level 3

Sorting, classifying and organising information | One set of materials against a few given headings | From a variety of sources, into main and sub categories | With cross-referencing; generate rules and use to predict

Drawing conclusions | From given information on a single topic | From a variety of information under some direction, e.g. for a given assignment; draw relevant conclusions from new data | From a variety of information unaided; show awareness of limitations

Summarising | Identify main points of a paragraph or short extract | Summarise a whole article, talk or long extract | Summarise and synthesise information from a variety of sources

Argument | Explain and justify own choices, e.g. of materials or treatment | Identify/express arguments in support of a proposition; adduce evidence | Analyse arguments and use to refute a proposition

Comparing and contrasting | Compare/contrast two or three items under direction | Compare/contrast two or three items | Compare/contrast any number of items

Analysing | Break simple material into component parts | Break complex material into component parts; identify appropriate other response

Evaluating | One topic against a few given criteria | One topic against a range of given criteria; several topics against a few given criteria | A range of topics against self-determined criteria


Figure 1 An extract from a college-devised skills framework used to profile course demands (continued)

Organisation | Level 1 | Level 2 | Level 3

Paper management | Keep notes and records sorted by topic or module | Set up and maintain a file system with main and sub headings and an index | Set up and maintain a file system which includes cross-referencing

Time management | Attend on time; meet deadlines for a sequence of tasks | Manage free study time appropriately; manage concurrent tasks and meet deadlines | Prioritise concurrent tasks; plan ahead

Revising | Undertake revision where necessary | Plan and implement a revision timetable

Self management | Co-operate with tutor to identify strengths/weaknesses; agree short-term goals; review progress | Plan work to improve on weaknesses; with support: evaluate own performance, set medium-term goals, review progress | Set own short-, medium- and long-term goals; review own progress; evaluate own performance

Information gathering

Information seeking | Use library and flexible learning centres with support; identify and use external sources of information with direction; design and undertake a simple survey | Use library and flexible learning centres unaided; identify and use external sources of information freely; design, pilot, modify, undertake and evaluate a simple research task

Note taking | Simple notes in pre-set formats, e.g. gapped handouts | From text or talk with support, e.g. on a familiar topic | From text, video, talk or audio material (see also Language: extracting key information, detailed understanding, discussion skills, inference)


Figure 1 An extract from a college-devised skills framework used to profile course demands (continued)

Information output | Level 1 | Level 2 | Level 3

Understanding tasks or questions | Follow simple written or verbal instructions with a memory aid | Analyse components of task; make choices; follow instructions without memory aid | Complete tasks from outline instructions only; identify not only content but also appropriate tone/treatment

Use appropriate written/oral/graphic formats | Brief written or verbal reports; forms, e.g. accident reports; multiple choice/short answers | Semi-formal report; small group presentation; formal letters; essays | Formal/lengthy reports, e.g. extended essay, technical report; individual oral presentation

Planning and drafting | Planning stage in some course work | Planning stage in all work; first draft stage for major assignments | Second draft stage in major piece of work

Proof reading | Check content for accuracy; proof read for spelling, grammar and punctuation with support | Systematically proof read for spelling, grammar and punctuation; check content for accuracy and tone

Handling tests and exams | Follow instructions correctly | Manage time within test or exam; practise beforehand | Strategies for coping with strengths and weaknesses (see also Language: register, tone, style, paragraphing, sentence structure)


Figure 2 An example of a college-devised checklist identifying entry requirements

Styles of writing needed | Essential | Taught | Not relevant

What sort of writing are students required to do?

Copy notes

Notes for own use

Supply single word answers

Make short answers on factual matters

Write a descriptive paragraph of factual matter

Descriptive writing with explanatory commentary on content

Descriptive writing with critical or evaluative commentary

Persuasive writing

Analytical writing

Imaginative writing

Selection and use of type of language register, e.g. informal/formal; level of language; technical

Knowledge of particular layouts: letter, report, essay

Other

Writing skills checklist

Content:

Writing is relevant to task set
Sufficient detail has been included
Irrelevant information is excluded
Ideas flow logically
Ideas are clearly expressed
Ideas develop an argument or analysis

Language:

Everyday appropriate vocabulary
Specialist vocabulary is used
Style of language is appropriate

Conventions:

Clear presentation
Grammar acceptable
Spelling acceptable
Punctuation acceptable
Formal writing structure is used: introduction, body, conclusion


Choosing/devising initial assessment materials

In choosing or devising initial assessment materials, colleges need to:

be clear about the purpose for which tests will be used
identify the skill demands of particular programmes and ensure that initial assessment materials relate to them
ensure that specialist learning support staff can be available to advise, guide or work with curriculum teams
clarify roles and responsibilities in line with college policy and a code of practice.

TIMING INITIAL ASSESSMENT

The stage at which initial assessment is introduced to learners relates to purpose. Where the purpose is to inform judgements about the type or level of programme, the learner needs to be assessed pre-entry. This may be through an interview which seeks evidence of a match between previous experiences and skills and those demanded by the level and kind of programme. Qualifications are often recognised as evidence of previous experience and skills, and where students do not have the qualifications stated in the formal entry requirements some colleges have chosen to implement pre-entry testing. Assessment is wider than testing, and Chapter 3 seeks to clarify technical aspects of both.

Students are only tested on the main college test if they do not have the required minimum GCSE or (G)NVQ or equivalent qualifications from a reputable institution. At present students should be tested if they do not have one of the following:

D in GCSE English (E for GNVQ level 2)
Communication at GNVQ level 2
other nationally recognised equivalent.

In this example, initial assessment is focused on language skills and the college uses the tests to determine whether applicants will be able to deal with the course for which they are applying. Where a college has an open access policy such testing needs sensitivity. It must be made clear to learners that the outcome of testing will inform the level of programme which the college believes will best suit the learner and through which they will be most able to achieve.


Some colleges screen for levels of basic skills, pre-entry, to identify the learners most likely to benefit from learning support on-programme.

Figure 3 An extract from a college's Initial assessment guidance for tutors

Q Is the screening about getting people onto the right courses?

A The screening exercise is not a screening-out test. It shouldn't be used for pre-entry selection. The exercise tells us nothing about the nature of the particular difficulty any student may be facing. It also says nothing about their potential to develop over the duration of the course. Getting students on to the right course is more rightly the concern of the initial interview and induction.

This exercise is to be used after the student has started on the course. It tells us where a student stands in broad literacy and numeracy terms in relation to the literacy and numeracy levels required to complete the course.

However, some students do end up on inappropriate courses. With sensitivity and discretion the screening results might form part of the evidence which suggests that students should be re-routed onto a more appropriate course, be it at a more or a less advanced level.

If the outcome of testing is likely to inform selection, or to lead to re-routing where a choice has been made and a place offered, this needs to be made clear to learners.

Where testing is used pre-entry for a large number of learners, the information generated can be used at whole-college level to inform curriculum planning and resourcing as well as support services. However, the benefits of being able to plan early need to be balanced against the costs of pre-entry testing. The most significant costs centre around students who don't enrol after testing: some because, regardless of testing, they choose to go elsewhere, and others, perhaps, according to college anecdotal evidence, because they were adversely affected by testing. It is useful to compare numbers of applications with enrolments. Large-scale use of pre-entry assessment is not cost effective if the conversion rate is not good.

Most colleges did not screen pre-entry, as they did not want the students' perception of testing to present a barrier to access.


Screening learners at entry is now common practice in most tertiary and general FE colleges. The purpose of screening at this stage is to identify students who need support. However, in some colleges students who have been accepted onto programmes at particular levels have been re-routed where screening suggests that there may be difficulties. Colleges need to be clear in communicating to students the purpose of screening and the possible outcomes. If significant numbers of students are re-routed after testing, it may be necessary to re-evaluate the college's admissions/selection procedures.

Initial assessment is part of the continuum of gathering information about students that begins with recruitment. It is a post-enrolment activity that contributes to the development of individual learning programmes. Its place in the learner pathway is clearly identified in threshold/entry services.

Although the Basic Skills Agency's (BSA) screening tests are the most commonly used, several colleges have developed their own screening and diagnostic assessment materials. Both Basingstoke and Sheffield Colleges' materials are bought by other colleges across the sector. Screening tests used by colleges to assess students at entry are limited in scope, and focus largely on literacy and numeracy skills. BSA screening tests and a very limited range of commercially produced materials are used widely with students wishing to follow programmes at level 2 and below. Where colleges screen students on higher-level programmes, they frequently use screening materials developed by themselves or other colleges.

Screening at entry identifies an approximate level of skill. Learners who are more than a level below that required to function on their chosen learning programme will be targeted for support. Experience suggests that students need to be identified and programmed for support at the earliest possible opportunity:

Students in need were arriving for support very late in their course, so the help we could give them was less effective than it could be. Typically, they were arriving having already under-achieved on assignments or unit tests, and tended to be demoralised and negative.

The guidelines prepared for tutors administering screening tests in one college signal the importance of screening early; give a deadline for the return of test results; but at the same time offer some flexibility to programme teams about exactly when testing takes place.

Figure 4 An extract from guidance on assessment

Screening should take place as early as possible. It obviously helps us if we can get a basic skills profile of our students quickly. It also benefits those students needing support to be identified as soon as possible. However, the exercise can be carried out at any time during the first three weeks of the course. This will mean that course teams can choose the time and setting most appropriate to their students. Regardless of when a course team decides to complete the exercise, we need the results back no later than Friday 30 September.

Feedback from adult students on an Access programme in another college indicates that the MENO Thinking Skills assessments were introduced too late in the course:

I feel that the MENO booklets should have been given out at the beginning of the term and not in the middle of assignments.

The outcome of a screening test signals whether a student may need support but does not provide a detailed profile of what a learner can or cannot do, so some colleges have developed diagnostic assessment materials so that further judgements can be made through the induction process. Where there is good practice, tools will be rigorous and robust and will have been developed in collaboration by vocational and specialist staff, in the light of a programme-specific skills profile. Only students who have been flagged up as needing support will require assessment at this stage.

Again, it is important to administer, mark and feed back to students as soon as possible.

Initial assessment and feedback

Initial assessment, together with student feedback and negotiating support, should usually take place during the first four weeks of a programme. This helps to avoid confusion with 'selection', and also ensures group and individual needs are identified early. The exact timing will depend upon the nature and length of a particular programme.


Unlike the screening tests, which are quick and easy to mark, diagnostic assessments can be very time-consuming. One college reported three to four hours' marking for one diagnostic assignment for a tutor group of 20 students.

Even a sophisticated initial assessment process may miss students who have support needs. It is important to remember that assessment is an on-going process and, if there is evidence that a student is failing to make progress, learning support may help.

The outcomes of screening will need to be available early enough to identify which students will benefit from more detailed assessment through the induction process. A clearly communicated schedule which identifies what will happen and when, with dates and deadlines, will smooth the process of initial assessment across the college.

Purpose and timing of assessment

The purpose and timing of assessment need to be considered so that:

assessment which seeks to aid placement takes place pre-entry
assessment which seeks to identify skill levels and support takes place at entry
assessment which seeks to identify the specific nature of support needs is an integral part of the induction process for those students who have come through screening as needing support.

ADMINISTERING INITIAL ASSESSMENT

In this context, administration has been interpreted as introducing the assessment process to students, giving out materials, supervising and, where necessary, timing students' engagement with tasks, and collecting in papers. Most colleges chose to administer (but not always mark) the Basic Skills Agency screening tests through programme teams. A few colleges chose to assess applicants pre-entry and used specialist guidance or learning support staff.

A critical part of the process is the way in which staff have been prepared for their role. They need to be clear about the purpose of the initial assessment and to understand the need to communicate this information to the students. Some colleges offered staff briefing sessions for programme teams and some produced written guidelines.


Figure 5 An extract from a college's guidance on initial assessment produced for tutors

Therefore, it is important that we think about how the exercise is presented to the students and that we are clear as to why we are asking them to complete it. As such, it is worth letting the students know that:

there is no pass or fail mark
no one group is being singled out: all full-time students are being asked to complete the exercise, from Foundation to Access and A-level
the screening exercise will help us plan provision more effectively; in that sense, the exercise is essentially about our teaching meeting their needs
the results are confidential.

Students need to feel reassured that the outcome of their screening tests will be fed back to them and will be used for their benefit and the benefit of the college.

Confidentiality is an issue. Clearly individual results will not be shared with other students, but in most if not all cases they are shared with other staff. In psychometric tests, raw results remain confidential to the tester, but key messages or trends may be communicated to other staff, as well as being explained to the student concerned. Colleges need to be clear in communicating to students who will have access to data, in what form, and why.

Where colleges developed their own initial assessment materials, programme teams usually administered the range of tests, apart from, in some colleges, the writing tests. Again, staff briefing, guidelines and support were given a high priority, as was scheduling (see Figure 6 on page 20).


Figure 6 An extract from a college schedule for managing initial assessment: delivery of assessment

A
Who delivered: Course leaders to all students
When delivered: Week 1
Staff preparation: Experience from previous year; project team support
Student preparation: Preliminary explanation of rationale for assignment to whole group; detailed explanation with personal tutors in three groups
Timescale: By end of week 1
Organisation: Students allocated at random to three groups

B
Who delivered: Course leader with course leader for Advanced
When delivered: Week 1
Staff preparation: Briefing by course leader; project team support
Student preparation: Self-assessment checklist completed prior to assignment in class; students alerted to the assignment's function as assessing key skills
Timescale: By end of week 1
Organisation: Joint induction with advanced students (35), groups of 4/5; completed independently; hand-written, not word-processed

C
Who delivered: Range of lecturing staff
When delivered: Week 1
Staff preparation: None other than course leader; project team support
Student preparation: Cover sheet explaining rationale; course leader gave explanation to whole group
Timescale: By end of week 1
Organisation: Students chose own groups

D
Who delivered: Course leader
When delivered: Week 2
Staff preparation: Project team support; all staff briefed to be ready to help students
Student preparation: Given specimen worked assignment from Business Studies to look at; general explanation but no specific reference to assessment
Timescale: By end of week 2
Organisation: Groups of 6; screening tasks to be completed individually


One college experienced problems where programme teams were not aware of the total picture; were not supplied with all the necessary materials; and were unclear about how the outcomes would be fed back to students and used by the college. Briefings for further assessment were planned around the following checklist:

The tests should include a reading comprehension and a piece of continuous writing.

Reading comprehension should be based on a piece of text which students will have to read as part of their course if it is a course-specific test.

Clear written instructions should be given to candidates.

There should be a written marking scheme which indicates:

acceptable level
acceptable level with additional support
skills not good enough
skills too good, refer to higher level.

There should be criteria underpinning the marking scheme which are made explicit to candidates (e.g. 'You will be marked on the accuracy of your English, your ability to organise your material and the level of your vocabulary and sentence structure').

A feedback sheet identifying any support requirements should be given to the student.

There should be a sheet to go to client services indicating that a need for additional support has been identified and any possible information on the nature of the need.

Evaluating early experience of administering programme-specific assignments through the programme team's induction process, student feedback in one college indicated:

students felt comfortable with the assessment process, but were not always clear about the purpose and the link to learning support
the materials were not always seen as relevant to the programme of study
students were unclear what would be judged and how performance would be measured
over a third of students found the tasks 'too easy'.


Planned improvements included:

The content and the specific skills assessed should be clearly relevant to the course and this relevance should be explained to the students.

The assignment should be demanding enough to give information on students' competence at a number of levels.

The skills assessed should be not only basic literacy and numeracy, but also organisation, personal and study skills.

Students should be more closely involved in the process of assessment:

they should know that they are being assessed and what criteria are being used
they should have detailed individual feedback on their strengths and weaknesses
they may even be involved in grading decisions on their own or peers' performance
they need to be given more information about the standards expected of them.

Both staff and students need to understand clearly the purpose of the initial assessment exercise. Materials need to be seen to be relevant to the purpose and will be better received if students perceive them to be relevant to the chosen area of study. There is evidence that students will perform better when they are clear about what is being measured and how judgements will be made. Both staff and students need to know about time and any other constraints.

Psychometric tests in the project colleges were always administered by specialist staff in line with codes of practice. However, other materials bought in for initial assessment were sometimes introduced by programme teams. For example, the MENO Thinking Skills assignments were introduced to students on an Access course by members of the core skills team, supported by specialist learning support staff, and students were able to complete quite extensive pieces of work in their own time.

Much care went into the introduction of externally produced assessment materials and students were often given opportunities to practise before engaging with a formal assessment piece. Where colleges have specialist staff administering national tests to small, discrete groups of students, it is important to draw on their experience so that, where appropriate, it informs the college's overall approach to the implementation of initial assessment.


Figure 7 An example of instructions to students in a college-devised task to assess writing skills

Using the tasks
Students must be told why and how they are being screened. For example:

You are to be asked to do some writing to see how you:

write to the point
express yourself clearly
plan/organise your writing
present your work
use grammar, punctuation and spelling.

Make sure the instructions are clear. For example:

You will have 30 minutes to answer the question. Spend the first five to ten minutes reading the question and working out how you're going to answer it. You should aim to write about 200 to 300 words, about a side to a side and a half of A4. Concentrate on getting your ideas across clearly.

To manage the process well, colleges need to be clear about the roles and responsibilities of all concerned. It may be helpful to start from an ideal student experience and work back from there in making decisions about who needs to perform what tasks and how they need to be briefed and supported.

Administering initial assessment

Colleges need to:

develop clear implementation strategies which relate to the purpose of initial assessment, in line with the college's assessment policy
clarify and confirm the roles and responsibilities of all staff involved in the process
establish clear and simple systems for managing the process of initial assessment
offer training, guidance and support to staff to ensure they feel comfortable and confident in their role
communicate clearly to students the purposes and processes of initial assessment.


MARKING AND FEEDBACK

Marking Basic Skills Agency screening tests and 'home-grown' initial assessment materials was sometimes done by programme teams and sometimes by specialist learning support teams. The former approach can spread the load and should make it possible to mark large numbers of tests in a relatively short time, to provide immediate feedback to students and set up support before students become demotivated by difficulties.

In fact, some colleges found that marking was not done immediately, took more time than anticipated, and occasionally had to be redone because of inaccuracies. One college noted that 'Marking was extremely variable and must significantly affect the usefulness of the tests.'

Positive outcomes from involving programme teams in marking initial assessment exercises include:

the raised profile of basic skills and basic skills needs with large numbers of teaching staff
a recognition of the need to change classroom practice through more inclusive approaches to teaching and learning.

These benefits must be worth working for and, where marking needs to be improved, staff training, guidance and support will improve the accuracy and consistency across the college. See the example in Figure 9 on page 34 of staff guidance for marking a writing task.


Figure 8 A college schedule for marking and feedback from initial assessment

Columns: Who marked / Marked by when / How marked / Who gave feedback / When fed back / How fed back / Comments

A
Who marked: Personal tutors but not own tutor group
Marked by when: Tutorials marked in week 1; delays thereafter due to staff illness
How marked: Answer sheet for Maths; language support tutor devised criteria for writing, adapted into tick list by course leader; took about 10 minutes per assignment
Who gave feedback: Personal tutors to tutor groups
When fed back: In first tutorial in week 1
How fed back: Individual interview

B
Who marked: Course leaders
Marked by when: Week 1
How marked: First trawl by 'gut reaction', then using checklist; answer sheet for Maths
Who gave feedback: Course leaders
When fed back: Week 1
How fed back: Individual interview as part of action planning
Comments: Checklist cut time in marking; some students reluctant to accept need; setting assignment in course context helped

C
Who marked: Course leader
Marked by when: Week 3
How marked: Course leader prepared own Maths marking guide; language support tutor prepared criteria for writing, simplified by course leader for use
Who gave feedback: Course leader
When fed back: Week 4; weakest students given feedback first
How fed back: Individual interview
Comments: Very casual in order not to provoke negative atmosphere

D
Who marked: Course leader
Marked by when: Week 5
How marked: Initial trawl by 'gut reaction', then own checklist
Who gave feedback: Course leader
When fed back: Week 6
How fed back: Individual written feedback on assignment front sheet; general verbal feedback to whole group; invitation to discuss if wanted
Comments: Students who used graphics package avoided having to do the maths calculations; no record of poor performers


Figure 9 An example of staff guidance on marking a writing task

The aim is to build a profile of the skills and knowledge a student has demonstrated. The skills usually fall into three categories:

content, e.g. relevant use of language
conventions, e.g. punctuation, spelling, grammar
presentation, e.g. handwriting or layout.

If the writing task is short, these can be assessed in one reading. More complex tasks may require two readings, one for content and then a check on conventions.

Depending on the level of the programme, the levels of language, spelling and punctuation will vary. It is impossible to give precise guidelines but the table below offers some guidance to the levels which should be expected at entry.

Level / Language / Grammar and structure / Punctuation / Spelling

1 Foundation: Simple / Sentences / Capital letters / Most everyday foundation words
2 Intermediate: Simple, clear / Sentences / Full stops, capital letters / Everyday words
3 Advanced: Basic adult level / Sentences, paragraphs / Full stops, commas, capital letters / All except unusual words
4: Wide ranging / Introduction, main body, conclusion / All common, including apostrophes, colons / All words

Within each of those categories a margin of error may be acceptable, e.g. two or three minor errors may be acceptable.

The assessments can be recorded on the screening grids which provide individual and group profiles. A tick or a cross is made under each of the skills/knowledge headed columns against students' names. In practice there are four assessments possible:

student's performance at an acceptable standard
student's performance not at an acceptable standard
conflicting or uncertain evidence: further monitoring and discussion needed
no evidence available: further monitoring and discussion needed.

Individual student profiles across the range of entry criteria are read across the page and group needs for each criterion can be read down the page.

Students with crosses in the content columns, who cannot write relevant ideas or who are difficult to understand, should be referred for individual support. The conventions of spelling, grammar, punctuation and presentation may result in a referral if the problem is severe and if these are individual and not group needs.
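Whether the grid is kept on paper or electronically, the underlying structure is simple: one row per student, one column per skill, and one of the four assessments in each cell. The sketch below (Python, with invented student names, skill headings and outcomes purely for illustration, not taken from any college's grid) shows how an individual profile reads across and a group need reads down.

# A minimal sketch of a screening grid. Outcome codes (invented here):
# "ok" acceptable, "x" not acceptable, "?" conflicting or uncertain evidence, "-" no evidence.
grid = {
    "Student A": {"content": "ok", "conventions": "x",  "presentation": "ok"},
    "Student B": {"content": "x",  "conventions": "x",  "presentation": "?"},
    "Student C": {"content": "ok", "conventions": "ok", "presentation": "-"},
}

# Individual profile: read across one row.
print("Student B profile:", grid["Student B"])

# Group need: read down one column, counting who is below standard.
skill = "conventions"
below = [name for name, skills in grid.items() if skills[skill] == "x"]
print(f"Group need in {skill}: {len(below)} of {len(grid)} students ({', '.join(below)})")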


Where programme teams are able to work with their students to identify their difficulties early, they are more likely to want to share and 'own' problems and to help provide solutions.

Some colleges introduced quite complex models for marking student work produced through an initial assessment process, and feeding back the outcomes to learners and to appropriate staff. Simple, transparent systems work best and colleges keen to improve their practice will do well to remember the purpose of the exercise is to target appropriate support at students who need it, before difficulties precipitate themselves as problems of non-attendance and drop-out.

Where programme teams find a significant proportion of new intake are assessed as needing support, there may be an argument for re-examining and re-defining entry criteria. Learning support cannot compensate for a poor match between the experience and skills of the learner and the demands of the learning programme.

All students need to be told how they have been judged through the initial assessment process. Feedback needs to be given as soon as possible after the assessment process. Colleges which evaluated student perceptions of the initial assessment experience found that 66% of students reported feelings of low self-esteem after testing, with 70% making positive comments about themselves after feedback.

Programme teams need to have access to aggregate data for groups they teach. They will need access to a clear and simple system for communicating information about both individual needs and group profiles to learning support managers.

Information should be used to plan additional support for individuals where necessary and appropriate, as well as to inform strategic planning.

Students and all the staff who teach and support them must have access to a simple system which recognises, monitors and tracks progress.


Marking and feedback

Colleges need to:

ensure consistency in the quality of marking of initial assessment tasks and tests
have a clear policy on confidentiality
provide immediate feedback to learners
communicate appropriate information from the outcomes of the assessment process to:
tutors
programme tutors
managers.


3 Technical aspects of initial assessment

ASSESSMENT AND TESTING

Assessment is not synonymous with testing. Objective tests provide one means of obtaining information about students, in particular about their potential for success and their support needs. While most people feel they know the difference between 'tests' and general forms of assessment, it is very difficult to define what 'objective tests' are. Any attempt to provide a precise definition of a 'test' or of 'testing' as a process is likely to fail, as it will tend to exclude some procedures which should be included and include others which should be excluded.

Within the context of FE initial assessment, objective tests are procedures which are used to make inferences about a person's ability to cope with the demands of various programmes and, by implication, their likely support needs. As such, they are used as forward-looking assessment tools. Tests of ability and aptitude are forms of assessment concerned not with assessing what you can do now or what you have achieved in the past, but with making inferences about your potential to achieve in the future.

Tests are those assessment methods which:

provide quantitative measures of performance
involve the drawing of inferences, for example, about a person's potential to learn or the reasons for them experiencing difficulties in learning, from samples of their behaviour
can have their reliability and validity quantified
are normally designed to be administered under carefully controlled or standardised conditions
embody systematic scoring protocols.

Any procedures used for 'testing' in the above sense should be regarded as an 'objective test'. All such tests should be supported by evidence that the assessment procedures they use are both reliable and valid for their intended purpose. Evidence should be provided to support the inferences which may be drawn from the scores on the test. This evidence should be accessible to the test user and available for independent scrutiny and evaluation.

Much of what follows will also apply to assessment procedures which lie outside the domain of 'tests': interviews, appraisals of records of achievement, and so on. Any assessment process which, if misused, may cause personal harm or distress should be used carefully and professionally. Misdiagnosing a person's learning support needs may not only waste college resources, but damage the self-esteem and confidence of the individual concerned.

Objective tests differ from other forms of assessment in that they are based on a technology which makes it possible to specify what they are measuring (their validity) and how accurately (their reliability). Typically, educational assessments (such as tests of college attainment, GCSEs and other school examinations) do not have these qualities: their validity is a matter of judgement and their reliability often unknown or unassessed.

Objective tests assess a broad range of human characteristics: personality, motivation, values, interests, ability and achievement. However, the present guidance is only concerned with one subset of this complex area: the assessment of basic or key skills and the diagnosis of specific learning needs. (For more detailed information, see Appendix 2.)

While objective tests can be used as a mechanism for making judgements about learners, it is important to remember that assessment is more than just testing and that other activities can be effective in providing evidence of students' knowledge, skills, attitudes, performance and needs. Indeed, FEFC guidance on completing the Additional Support Costs form assumes that evidence for claims will come from a range of assessment activities:

Institutions will use a range of assessment instruments and strategies throughout the learning programme to identify students' additional learning support needs. The assessment carried out should be relevant and identify an individual's need within the context of the curriculum followed.

(FEFC 1996-97)


This broader level of thinking needs to be encapsulated in the college's assessment policy, which needs to be known and understood by staff.

ASSESSMENT POLICY

Managing assessment (FEDA 1995) examines the development of a whole college strategy for consistency and quality in the management of assessment and sets out key features of an assessment policy. Elements to be included in an assessment policy are:

assessment principles
assessment stages
assessment processes.

Key principles of assessment apply to all stages in the learner pathway and to the vast spectrum of programmes with different assessment regimes. These principles need to underpin initial assessment practice for all learners:

enhancement of learning: a key purpose of assessment is to ensure learning
reliability and validity: all assessment should be based upon explicit assessment objectives
shared understanding of standards: staff are trained in assessment
quality assurance: a system is in place to monitor assessment practice.

(FEDA, 1995)

Consistency of initial assessment practice should be guided by a whole college policy, with student entitlement laid down in the student charter. It may be helpful to consider extracts from one college policy (see Figure 10) to compare this with your own.

You may want to reflect on your college's initial assessment practice and consider the extent to which it reflects general principles of fair assessment, the excerpts above and your own college policy. Such reflective practice will be instructive as colleges move to become more self-critical as a means of improving the quality of all aspects of college services and provision.


Figure 10 An example of a general assessment policy statement adapted from a college's student charter

Assessment is an integral part of all learning activities at X College. It is natural and relevant to all students and enables both the learner and the tutor to identify learning that has taken place, plan the next stages, motivate further learning, and encourage development progress.

Assessment regimes and procedures for the programme as a whole will be explained, at an appropriate time, to students by members of their course or programme team. In addition, students will be given information on individual assessment, to include:

what is assessed
the criteria for success
when to be completed
when it will be marked and returned
information about results and performance.

INITIAL ASSESSMENT: CLARIFICATION AND DEFINITIONS

Initial assessment is the earliest assessment of learners as they move through the pre-entry and entry stages. Initial assessment processes need to reflect the spirit of the college's policy statement and the entitlement set out in the student charter.

In most colleges initial assessment is a staged process and probably starts with pre-entry guidance. Here the learner and the guidance specialist look together at previous experience, 'at learning that has taken place', with the aim of identifying the most appropriate choice of learning programme so that personal and academic progress is made.

Judgements made here will be likely to be conditioned by the learner's awareness of self and their ability to articulate that self-knowledge along with their aspirations. Documentation which the learner may bring, like a careers action plan, record of achievement or portfolio of work, can all inform discussions and decisions which are made about learning pathways.


In certain situations potential students may be assessed through the use of tests, for example, aptitude tests, basic skills tests. The purpose of testing at this stage should be to help the learner form a more accurate picture of themselves so that they are better able to consider information and advice about learning programmes in relation to individual needs and aspirations. The purpose of testing should be communicated to students.

SELECTION: PLACEMENT OR CLASSIFICATION

Selection is a complex process. It involves all that happens from the time a potential student approaches the college, possibly only seeking information, to when they start their learning programme. Selection involves two processes, placement and classification. There can be tensions between these as the first focuses on the needs of the student applicant, while the second concerns the resources of the organisation to which they are applying.

Placement means placing the learner on the programme best suited to them after considering prior experience, interests, skills and future plans, alongside detailed information on programmes. Judgements about learners will be conditioned by their awareness of self and evidence of their achievements.

Tests can be used in this context to get a measure of general aptitude or ability, or perhaps to get a measure of competence in basic skills. In this context, a learner who demonstrated a low level of numeracy skills would not be placed on a programme which required high-level numeracy skills. Effective placement decisions are best made by those who do not have a vested interest in recruitment or growth targets, so that learners can benefit from disinterested advice.

Classification, on the other hand, involves optimising the assignment of applicants to places available on programmes within the college. Classification is a process of finding the best overall fit between applicants and places. In this context, 'best' means the most cost-effective in terms of organisational outcome.

In practice, applicants will select a programme through admissions procedures and will be advised in relation to the demands which it will make on them. If there is perceived to be a good match between the applicant's interests and capabilities and the demands of the programme, and there are places available, they will be likely to be selected for it.


FEDA strongly supports placement as good practice in admissions procedures and does not wish to promote the process of classification at the expense of the student's individual best interests. We do, however, recognise that organisational constraints will affect the opportunities available.

THE SELECTION INTERVIEW

The assessment process which informs selection is usually in the form of an interview. Interviews will be most effective when the interviewer is clear about both the key requirements of the programme and the kinds of evidence which can demonstrate that the interviewee can meet them. Objective tests can contribute to that evidence. In particular, they can show when applicants have a potential for success which is greater than their achievements to date would indicate.

Earlier FEU work on entry criteria and GNVQs in 1995 suggests over-reliance on evidence of prior achievement in admissions interviews, rather than exploration of future potential.

The following tables provide information on the dimensions of achievement which colleges considered important, and the sources of evidence on which they rely. A group of colleges identified the dimensions of achievement which they believed were important to student success in GNVQ programmes. They also identified the sources of evidence which they were likely to note as indicators of student achievement. These indicators and dimensions formed the basis of a national survey on entry criteria and GNVQ. Table A indicates respondents' judgements of the importance of the various dimensions. Table B shows the rank order, drawing from the percentage of colleges indicating each source of evidence as being important (colleges could, of course, indicate more than one). Finally, Table C (best indicators for each dimension) shows which sources of evidence were considered to be the best indicators of each of the dimensions of achievement.

This illustrates that at the time of the survey there was a clear dependence on GCSE grade profiles as best indicators of some of the dimensions likely to lead to success on GNVQ programmes. Performance in GCSE was the source of evidence most used as a basis for matching students to programmes. However, GCSE performance was not seen as a good enough indicator of learning support need. At the same time, survey colleges were also introducing screening and diagnostic assessment to identify learning support needs.


Table A Dimensions of achievement identified as important by colleges

Dimension %

Academic ability 46

Motivation/interest 25

Ability to communicate 22

Vocational knowledge/skills 18

Potential in vocational programme 16

Organisational ability 13

Relevant work experience 10

Maturity 10

Flexibility in learning 10

Table B Indicators of student achievement in rank order of percentage of colleges identifying each

Source of evidence %

4-5 GCSEs Grades A-C 31
References/reports 27
National Record of Achievement 25
Portfolio of evidence 24
Previous GNVQ achievement 23
4-5 GCSEs Grades D-E 22
BTEC First 21
NVQs Level 2 21
GCSE Maths Grade A-C 18
4-5 GCSEs Grades below D-E 17
GCSE English Language Grade A-C 15
Other (e.g. interview performance) 12
Core skills attainment 11
Specific tests 3

Table C Perceived 'best' indicators for each dimension

Dimension: Indicator (%)

Academic ability: 4-5 GCSEs Grades A-C (72)
Vocational knowledge/skills: References/reports (35)
Motivation/interest: 4-5 GCSEs Grades A-C (47)
Relevant work experience: Portfolio of evidence (22)
Ability to communicate: 4-5 GCSEs Grades A-C (42)
Flexibility in learning: 4-5 GCSEs Grades A-C (22)
Organisational ability: Portfolio of evidence (25)
Potential in vocational programme: 4-5 GCSEs Grades A-C (30)
Maturity: 4-5 GCSEs Grades A-C / References/reports (19)


Initial assessment which informs placement processes needs to identify the best possible match between the learner and the level and kind of programme to be followed. GCSEs may well constitute evidence of some dimensions of achievement but there is an argument for seeking a wider range of evidence. As a general rule, evidence of achievement in the past is a good indicator of future potential. However, lack of past achievement does not necessarily indicate poor future potential. Past achievement is an outcome of a complex mix of personal and situational factors (ability, opportunity, motivation, the learning environment and so on). As colleges are potentially able to provide opportunity and supportive learning environments, those with ability who lack a record of prior achievement may succeed in the future.

SCREENING AND DIAGNOSIS

The earlier section of this report confirms that many colleges use tests for both screening and diagnosis. It is important to understand the difference between these two related processes. One way of thinking about the differences between screening and diagnosis is to imagine looking at a scene through a camera with a zoom lens.

With a wide-angle shot, you get a good overall impression of what is in the scene, what sort of scene it is, but you do not get any of the details.

By zooming in on some particular part of the picture, you get a lot more detail about that part but lose sight of the overall picture.

Screening tests give you the wide-angle shots. Diagnostic tests provide a more detailed set of pictures. It is a matter of ensuring that the test chosen is best suited to the purpose for which it is being used.

Another way in which they differ is that diagnostic testing is about individuals, screening is about groups or populations. We 'screen' groups of people to identify those who are likely to have some particular quality, strength, or problem. We use diagnostic tests to identify the nature of an individual person's qualities, their strengths and weaknesses.

Psychologists use test batteries like the Wechsler Adult Intelligence Scales (WAIS), the Stanford-Binet and the British Ability Scales for in-depth individual diagnostic testing. As well as giving the wide-angle view of an individual's ability, they can be used to focus on specific areas and aspects of ability to give a much finer-grain analysis. These individual diagnostic batteries require a background of experience in psychological assessment, and extensive training and skill to use properly. In practice, people requiring this level of diagnostic testing should be referred to a suitably qualified chartered psychologist for assessment.

On the other hand, tests like the NFER-NELSON Basic Skills Tests, or the Psychological Corporation's Foundation Skills Assessment (FSA), which were referred to earlier (see page 12), are more widely available to suitably qualified users. These are designed for use as group tests providing the broad, wide-angle view. They are not intended to provide a very detailed diagnostic breakdown, at the individual level, of areas of strength and weakness.

The NFER-NELSON Basic Skills Tests measure both basic literacy and numeracy skills, and are designed for use with adults who have few, if any, academic qualifications. The literacy test is based around a newspaper from an imaginary town, while the numeracy test assesses the ability to carry out simple calculations, estimation and application of numerical concepts to everyday problems.

The Psychological Corporation's Foundation Skills Assessment is also designed to provide measures of attainment in basic numeracy and literacy skills for adults, using materials which relate to everyday situations. The FSA consists of four tests, covering Vocabulary, Reading Comprehension, Number Operations and Problem-Solving, each available at three levels of difficulty (A, B and C). There is a short screening test available to indicate which level of the FSA is most appropriate.

There are four possible outcomes from any screening process. Where a test is being used to indicate whether or not people might need learning support, these can be characterised as follows:

A: test outcome 'positive', person does not need support (a false alarm or false positive)
B: test outcome 'positive', person does need support (a hit)
C: test outcome 'negative', person does not need support (a correct rejection)
D: test outcome 'negative', person does need support (a miss or false negative)


In other words, a screening test will sometimes say that a person needs support when they do not (a 'false positive' result) and sometimes will fail to detect (a 'miss') someone who does need support. The number or proportion of positive outcomes can be varied by changing the test's cut-off score. Most screening tests for basic or key skills are designed so that the lower the score, the more likely the person is to need support. So, the cut-off score is the score below which a person's result is classed as 'positive', and above which it is classed as 'negative'. If you move the cut-off higher up, you would get more positives; move it further down and you get fewer positives.
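The arithmetic behind a cut-off score is straightforward. The short Python sketch below uses invented scores, invented support needs and an invented cut-off of 40, purely to show how each person falls into one of the four cells above and how moving the cut-off changes the number of positives; none of the figures come from any published test.

# Invented (score, truly needs support) pairs, for illustration only.
results = [
    (35, True), (52, False), (38, False), (60, False),
    (41, True), (28, True), (47, True), (55, False),
]
cut_off = 40  # scores below the cut-off are classed as 'positive'

cells = {"hit": 0, "false positive": 0, "miss": 0, "correct rejection": 0}
for score, needs_support in results:
    positive = score < cut_off
    if positive and needs_support:
        cells["hit"] += 1
    elif positive and not needs_support:
        cells["false positive"] += 1
    elif not positive and needs_support:
        cells["miss"] += 1
    else:
        cells["correct rejection"] += 1

print(cells)  # raise the cut-off and more people become 'positive'; lower it and fewer do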

For any particular cut-off score, the reliability of the test will determine the consistency with which people are classified, and the validity of the test will determine the proportion of people whom the test has correctly classified on any given occasion (i.e. those in cells B and C in the above table as opposed to those in A and D). These two points are very important. If a test is unreliable, then sometimes a person will 'pass' the cut-off score and on other occasions the same person will not. All assessments have a margin of error. For objective tests, the size of this error is known from the test's reliability. As a result we can make allowances for the risk of misclassification due to error in the measurement procedures. This is something we cannot do when we use assessment methods of unknown reliability.

Validity is different. It is about what the test measures, not how accurate it is. When a test is not valid, then, though it may be able reliably to classify people as falling one side or another of the cut-off score, this classification will have nothing to do with those people's learning support needs.

Typically, screening tests are designed to err on the side of caution, by having the cut-off score set so that you tend to generate false positives rather than misses. This is because you can always carry out further assessment of those with positive outcomes in order to confirm whether they really do have a support need and, if so, what sort of need it is.

However, once you have classified someone as a 'negative' test result, then you have said they are all right. Any needs they might have will have been missed.

So, the purpose of a screening test is to 'catch' as many as possible of those who are likely to need support, even if, at the same time, you also catch a few who do not.

Without going any further, screening tests can provide useful management information. For example, in FE, colleges can find out what proportion of people are likely to need learning support and hence identify the level of resource necessary to cope with that. In so doing, it is not necessary to identify the individual problems each person might actually have: that can be done later. What is important is that the necessary level of support resource is provided.

WHY DO WE NEED TO USE DIAGNOSTIC TESTING?

Defining individual problems and tailoring support to individuals requires diagnostic assessment. Those who are classified as 'positive' on a literacy screening test may have a variety of greater or lesser types of problem or support needs. Diagnostic tests can be used to check that they really do have support needs (i.e. that they are not just false positives) and the nature of those needs.

Not only is diagnostic testing expensive in itself, but it has implications for the institution. If you are going to spend the time and money needed for diagnosis, you need the resources to provide the specialised support to deal with the problems you diagnose. If that support is not available, diagnosing the fact that someone has a problem is of little benefit. It is like going through a series of medical tests, being told what is wrong with you, and then being told that there is no means of treating you!

USING SCREENING TESTS AND DIAGNOSTIC ASSESSMENTS

Screening is used by most colleges to give an indication of levels of basic skills for large populations of students. Cut-off scores are designed to support the process. The cut-off score is the significant factor which can vary the outcome of a screening test, and moving a cut-off score up or down obviously affects the proportion of a population that will be identified through the use of a screening test.

It is important therefore that guidance given with screening tests is followed and that the outcomes of testing are interpreted in line with the associated guidance. It is important that tests are administered in the form in which they are presented and are marked and interpreted in line with the guidance provided. The outcomes of these tests provide useful information which can be used:

at a strategic level to inform decisions about resource allocation and curriculum planning
as evidence of support needs in applications for additional funding units
to signal the need for further assessment to identify more specifically the capabilities and needs of an individual in relation to the learning demands which will be made of them.

Screening is only effective when the proportion of people you are trying to detect is significant, and the test you are using is valid and robust. If either no-one or everyone had the quality you were screening for, the test would be a waste of time. It only makes sense to use a screening test when you need to separate out those who do have some quality from those who do not. A particular test may work very effectively as a screening device for an intake population which has around a 50% occurrence of people with basic skill problems, but work less well for one where the occurrence is either much higher or much lower. If the percentage of people in the intake population is either very high or very low, then you need a more powerful test than if the percentage is around 50%.
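A rough worked example may help to show why the proportion matters. The sensitivity and specificity figures in this Python sketch are invented for illustration and are not taken from any published test; the point is simply how the same test behaves at different rates of occurrence in the intake.

def screening_outcomes(intake, occurrence, sensitivity=0.8, specificity=0.8):
    """Expected hits and false positives for a hypothetical test that correctly
    flags 80% of those who need support (sensitivity) and correctly clears
    80% of those who do not (specificity)."""
    need = intake * occurrence
    no_need = intake - need
    hits = need * sensitivity
    false_positives = no_need * (1 - specificity)
    return hits, false_positives

for occurrence in (0.5, 0.1):
    hits, fps = screening_outcomes(1000, occurrence)
    print(f"occurrence {occurrence:.0%}: {hits:.0f} hits, {fps:.0f} false positives")

# At 50% occurrence most positives are genuine (400 hits against 100 false alarms);
# at 10% occurrence more than half the positives are false alarms (80 hits against 180).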

As a result of this, a particular test could be highly effective as a screening device for one college, which had an intake with a high proportion of people with basic skills problems, but ineffective for another college where the intake was from a very different population.

There are many instances where college staff have devised tests for themselves. These tests are usually programme specific and are sometimes used for screening purposes, sometimes for diagnosing individual needs. Colleges have come to describe these home-grown assessment tools as 'diagnostic'. However, they are often used in the absence of any code of practice and without clear guidelines regarding the training needed for those who use and interpret them. It is important that appropriate guidelines are followed so that the quality and effectiveness of the use of these tools can be assured.

FEDA has collected examples of home-grown diagnostic assessment tools developed by 10 cutting-edge colleges, including a few who sell their assessment materials to other colleges and were identified by recent national research conducted by NCVQ as having a significant market share. All the materials and procedures associated with their use were examined and evaluated by the second author (a Chartered Occupational Psychologist with specialist skills in test development and test use). They were found to be of varying quality. Particular problems included:

setting cut-off scores for classifying people into different levels of attainment without any empirical justification for the location of the cut-off points


inappropriate diagnostic interpretation of responses to individual items or small subsets of items on tests designed as screening tests

home-produced tests lacking any evidence of reliability or validity or other supporting technical documentation

application of inappropriate 'summative' educational assessment approaches to diagnostic assessment.

To help colleges which choose to invest development time in producing their own programme-specific assessment tools, FEDA has distilled, from the evaluation of existing materials, good practice criteria and guidance to support colleges in producing more rigorous and robust assessment instruments.


4 Tools and techniques

QUALITY CRITERIA

For any particular assessment problem, a range of assessment methods may be available. How do you decide which to use? The quality criteria described in Managing assessment (FEDA, 1995) can be used as the basis for defining six important factors:

scope
reliability/accuracy
validity/relevance
fairness
acceptability
practicality.

While applicable to all types of assessment, these are of particular importance for forward-looking assessment which is used to make decisions about a person's future. Issues of equity and fairness become paramount in these situations.

Scope: what range of attributes or skills does the test cover, and what range of people is it suitable for?

Assessment methods vary in both their breadth and their specificity. A basic skills screening test may be regarded as 'broad' and 'general'. A battery of tests for the diagnosis of dyslexia would be broad but specific, while a test of punctuation would be narrow and specific.

Using the zoom-lens analogy mentioned earlier, the coverage of the test or test battery refers to how much of the total picture it covers. If it is a specific diagnostic battery, it may provide a set of detailed pictures which cover the whole scene or only some parts of it.

In considering the scope of a test, we also need to ask what populations or groups of people it is intended for. Anyone and everyone, or for those without formal educational qualifications? Is it all right to use it with people for whom English is a foreign language?


Reliability or accuracy: how precise is it?

What reliance can you place on the score somebody obtains? If they did the test again tomorrow, or next week, would you expect them to get the same scores? For objective tests, this 'precision of measurement' is referred to as reliability.

Reliability is assessed in two main ways: by measuring the consistency of the score and its stability.

We can say that a test provides a consistent measure of an attribute if the accuracy of the responses given to each question in the test is related to the 'amount' of the attribute a person has. A test will be inconsistent if there are some questions in the test which require skills or attributes other than those which the test is designed to measure.

A test is 'stable' if a person tends to get about the same score each time they take the test.

Both consistency and stability are usually expressed as correlation coefficients. These range from zero (meaning you cannot place any reliance on what the test score tells you) to one (which means that the score a person obtains is a perfectly accurate measure of the amount they possess of the attribute). In general, we expect tests to have reliabilities of at least 0.70, and up to around 0.90. Values in the low 0.80s are considered good.

Reliability is important because it tells us how much confidence we can place in a score. Any measure of a person's performance has a margin of error associated with it. If you measure someone on a number of occasions, for example, you will get a variety of scores. The extent to which these scores vary from each other (assuming the person's skill has remained constant) is a function of the level of error in the measurement process. The reliability of a test enables us to specify that level of error and to define the width of the 'band' or region within which we can expect a person's true score to lie (as opposed to the actual score they obtain on one occasion). The narrower or tighter this band, the more reliable the test.
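To make the idea of this band concrete, one common way of expressing its width is the standard error of measurement, which can be calculated from a test's reliability and the spread of its scores. The report does not spell out this calculation, and the figures in the Python sketch below are hypothetical rather than drawn from any particular test.

import math

reliability = 0.85   # hypothetical reliability coefficient
sd = 10.0            # hypothetical standard deviation of scores in the norm group
observed = 60        # the score a person actually obtained on one occasion

# Standard error of measurement: the typical spread of observed scores
# around a person's true score.
sem = sd * math.sqrt(1 - reliability)

# An approximate 95% band within which the true score is expected to lie.
lower, upper = observed - 1.96 * sem, observed + 1.96 * sem
print(f"SEM = {sem:.1f}; true score probably between {lower:.0f} and {upper:.0f}")
# With a reliability of 0.85 the band is about 15 points wide;
# with a lower reliability the band widens accordingly.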

Speed versus power tests

Tests of ability and achievement fall into two main types: speed and power tests.


Power tests are those for which enough time is given for most people to attempt all the items, and where the main factor determining whether someone gets an item right is the difficulty of the item.

Speed tests, on the other hand, consist of a number of relatively easy items which have to be attempted under a stringent time constraint. As a result, the main factor determining a person's score is how many items they attempt in the time available, rather than how difficult the items are.

It is very important not to mix up these two types of test. Setting a tight time limit on a power test will render it invalid, as will relaxing the time limit on a speed test. In practice, many tests fall somewhere between these two extremes. Most published tests have been designed for use with time limits, and all the information provided about them is based on people completing them within these limits.

The more a performance is constrained by time, the more difficult it is to get good measures of reliability by using internal consistency. In general, if tests are speed tests, or are power tests operating with tight time limits, re-test correlations will provide better estimates of reliability than measures of internal consistency.

For basic and key skills assessment, power measures are generally more appropriate than speed ones. This does not mean that these tests do not have time limits, but that the time limits are intended to be generous enough not to penalise people who are capable but slow.

Reliability and test length

Assuming you could generate unending numbers of questions for a test, all of which related to the same attribute, then the reliability of your test would be a simple function of how many questions you included in it. The longer the test, the more reliable; the shorter the test, the less reliable. This is why you should not use a screening test for diagnosis. As you break down the items in the scale into subscales, so the reliability decreases. An instrument with a respectably reliable overall scale based on 20 questions would be quite useless as a diagnostic tool if you start to look at performance based on subsets of four or five items each.
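One standard way of putting a number on this relationship is the Spearman-Brown formula, which is not referred to by name in this report; the reliabilities in the Python sketch below are invented purely to illustrate how sharply a 20-item scale's reliability falls when only a handful of its items are interpreted on their own.

def spearman_brown(reliability, length_ratio):
    """Predicted reliability when a scale is lengthened or shortened by
    length_ratio = new number of items / original number of items."""
    return (length_ratio * reliability) / (1 + (length_ratio - 1) * reliability)

full_scale = 0.85                                # hypothetical reliability of a 20-item scale
subscale = spearman_brown(full_scale, 5 / 20)    # the same items cut down to a 5-item subset
print(f"20 items: {full_scale:.2f}; 5 items: {subscale:.2f}")  # roughly 0.85 against 0.59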

Reliability is the key to good assessment. You cannot have a 'good' test which is not reliable, though you can have reliable tests which are no good for a variety of other reasons.


Validity or relevance: does the test measure what it claims to measure?

Validity concerns questions such as:

Is there evidence to show that the scores relate to the qualities which the test was designed to measure?
Do the scores enable you to make relevant judgements about the person, their current and future performance, their support needs, and so on?

If the measures are unreliable, the answer will be 'No' to both questions. If the measures are reliable, the answers may be 'Yes', depending on the evidence for the validity of the test.

It is useful to distinguish between construct validity and criterion-related validity. For a literacy test, for example, construct validity evidence would be that which supports its general claim to measure literacy. However, the relationship between scores on the scale and, say, future measures of learning support need is evidence of criterion-related validity.

Construct validity is a must: we can argue that we have a measure of literacy if, for example, we are able to show that it relates to some other measures of literacy. On the other hand, criterion-related validity is 'optional'. We may have a very well-designed general basic skills test with good construct validity, but find that scores on its scales do not actually relate to any of the learning support or other criterion measures which we have. Such an outcome does not mean that the instrument is no good, only that it would have no use in terms of the particular external criteria we were considering.

In judging an instrument's worth in terms of criterion-related validation studies, therefore, one has to consider very carefully the relevance and accuracy of the criteria used. The questions we need to ask are:

Does this instrument measure the characteristics it claims to measure?

To what learning criteria might we expect such characteristics to be related?

What empirical evidence is there to show that these relationships actually exist?

Evidence of criterion-related validity (or criterion relevance) comes in a number of forms.

The test could have been given to a large number of people whose learning support needs were subsequently monitored and recorded. Some time later this information would have been related back to their test scores. The relationship between their performance on the test and their subsequent support needs would be a measure of the predictive validity of the test. This is the strongest form of evidence for relevance.

A similar idea involves obtaining test scores from a group of people who may be following a training programme, and who vary in their current support needs. The relationship between their test scores and the measures of support is called concurrent validity (as both are being measured at the same time). A problem with this approach is that if the learning support has been effective, then taking both test and learning support measures at the same time will give misleading results. The more effective the learning support, the less predictive the test would appear to be.

Other forms of validity are matters of judgement rather than objective 'fact'.

Expert judgement as to the relevance of the content of the test items for various different purposes is known as content validity. As experts can be wrong, this should never be taken as convincing evidence of relevance on its own.

Face validity is simply what the person taking the test thinks the test is assessing. Face validity is important in establishing a good rapport with the candidate and ensuring co-operation in the assessment process. However, good face validity is no guarantee that a test is actually either relevant or useful.

Finally, there is faith validity. This concerns the test user's beliefs in the value of the test, generally in the absence of any evidence to support it. This is one of the greatest problems to counter. It is belief or faith, rather than sound evidence, for example, which is the basis for the continued acceptance of techniques like astrology and graphology in some areas of assessment. There is a very natural tendency to over-interpret assessment data (from all sources); to see patterns where none exist. The technical information provided with good tests is intended to 'restrain' users from doing this.

Fairness, or freedom from systematic bias

Are the results for different groups of people likely to differ systematically for reasons which have nothing to do with the relevance of the test? It is important when looking at any instrument to check for information relating to bias and possible sources of unfairness. Information should be provided about differences relating to gender and relevant minority groups. Ideally, information will also be presented to show that the individual items have all been examined for bias by careful examination of their content, and by statistical item-bias analysis.

However, it is also important to remember that bias is not the same thing as unfairness. Statistically, bias simply means that there is a systematic tendency for members of one group to respond differently from those of another. Whether or not that is unfair is quite a different question.

Suppose we wanted to use a test to assess the level of learning support provision needed for a particular training course. We use a test on which a score of less than 65% correct indicates a need for specific support. Suppose also that we find, on average, that women obtain lower scores than men on this test. This would suggest that women were more in need of learning support than men. However, there are two possible reasons for this outcome.

This is an accurate reflection of a gender-related difference (in which case the test is a 'fair' reflection of the real world, even if the real world is not 'fair').

There is a systematic bias in the test which results in women and men with equivalent learning support needs obtaining different scores. If this were the case, the test would over-estimate the number of women and under-estimate the number of men needing learning support.

In general, if it can be shown that the differences between any groups of people (either men and women, different ethnic groups, or old and young people) reflect real differences in their performance, then any test bias is 'fair'. If, on the other hand, the relationship between test scores and reality differs from one group to another, then the test is 'unfair' if these differences are not compensated for in some way.

It is very important when planning to use a test to check for evidence of bias (are the average raw scores different for males and females; do they vary with age, ethnic background, etc.?). In particular, great care should be taken when assessing people with disabilities to ensure that their disabilities will not unfairly impact on their test performance.

Numeracy and literacy

Before you use tests on anybody, especially tests of basic skills, you should check that the person has the numeracy and literacy skills needed to understand what is required to carry out the test. Those for whom English is a second language may have language problems which interfere with their potential to respond, for example, to numeracy measures.

The same is true of numeracy difficulties: if a test is designed to measure some attribute other than numeracy, but requires the test taker to be numerate in order to do the test, then it would have an unfair impact on those with numeracy problems.

Setting a level playing field

It is important to ensure that when a group of people sit down to take a test, they have all been provided with the same information about what they are about to do, and have been given the time and opportunity to become familiar with the testing process and the sort of materials they will be dealing with.

To ensure that there has been no inequality of opportunity in terms of fore-knowledge and preparation, most test publishers now produce practice tests or practice leaflets. These are intended to be given to test takers well in advance of their test session. It is also good practice to provide them with an opportunity to ask questions about the procedure before the test session starts.

Information provided before the test session should also make clear how you intend to use the information collected, who will have access to it, and for how long it will be retained. Following a code of good practice (see Appendix 1) will help to ensure that procedures for testing are fair and consistent.

Acceptability: can you expect people to co-operate in the assessment procedure?

In general people find tests acceptable when:

• the reasons for taking the test have been carefully explained to them
• they have been given adequate prior information about the nature of the assessment and the opportunity to ask questions
• the administration is properly carried out
• they are provided with feedback about their results and their implications.

Both faith and face validity, discussed earlier, affect acceptability. Acceptability is important because it affects the degree of co-operation you can expect from the test taker and the rapport you can establish with him or her in the assessment process.


Practicality: what does it cost, how long does it take, what equipment is needed?

The quality of the information which an assessment procedure provides must be weighed against the cost of obtaining that information.

Objective tests are highly cost-effective as they provide a lot of accurate, relevant information in a short space of time.

The results of objective tests provide information which it is very difficult to obtain using other methods.

However, there are costs. These fall into three areas:

• the costs associated with becoming competent as a test user
• the cost of buying or developing test materials
• the recurrent costs of using tests (time and materials).

You need to be properly trained and qualified if you are to use the tests appropriately and if you are to provide fair and balanced interpretations of their results. Improper use of tests and test results, or of other less objective methods, is not only potentially damaging to the individuals concerned, it can also be a waste of resource from the organisation's point of view and, increasingly these days, carries a risk of litigation. It is far less expensive to make sure you are competent in the first place, or that you seek advice from competent experts.

Any college which is seriously involved in diagnostic assessment should have at least one person on the staff who understands the principles and basic technicalities of objective measurement. These are spelled out in the 'Level A' test user standards specified by the British Psychological Society. Anyone intending to develop their own screening or diagnostic instruments will need a higher level of expertise than this, or should seek the support of an outside test development specialist.

WHAT RESOURCES ARE REQUIRED?

Objective test materials can seem quite expensive. As a result, it can seem like a good economy to create one's own. However, their price is a reflection of the development costs associated with producing instruments which meet the criteria discussed above. Furthermore, national and international publishers can make economies of scale which are not open to individual colleges. Making up your own tests is likely to be a false economy. Copying other people's tests without permission is illegal.

The indirect costs associated with maintaining an assessment resource also need to be considered. These include provision for storage of materials, time, and the cost of developing and implementing an organisational testing policy and carrying out quality control procedures. If you are only likely to carry out the occasional objective assessment then it is probably more cost-effective to use qualified chartered psychologists as consultants. Where you have a need for regular use, you need to set the costs (in terms of initial training, materials purchase and other set-up costs) against the benefits which will accrue.

While there is little doubt that good objective tests, properly used and interpreted, are one of the best sources of information about people's attainments and potential, there are costs associated with them:

• Becoming a qualified test user involves training which can be expensive and takes up valuable time.

• Establishing a suitable library of test materials requires a financial investment (although the actual per-candidate cost of testing tends to be quite low).

• Developing one's own tests, as described above, is a high-cost option. It also requires areas and levels of expertise which are likely to lie outside those possessed by staff in FE colleges.

• Test administration and interpretation take up time which you may need for other activities.

Record-keeping, monitoring and follow-up

To maintain quality control over any procedure, you need to record what happens and follow through decisions which have been made to see how effective they were. It is thus important to keep records, but also important to ensure that the information they contain remains confidential.

You should keep a record of which tests you have used, when you used them and why.

You will also need, with the test taker's permission, to keep a record of test results while they remain your responsibility.


As a rule of thumb, it is a good idea to keep in mind that any information you might obtain using an objective test 'belongs' to the test taker. Whatever you do with it should, therefore, only be done with their knowledge and permission.

Actual test scores should only be passed on to people who are qualified to interpret them. This includes your students. Otherwise, test interpretation in plain, straightforward English is all that should be given to other people. If possible, test scores and general information about the person's age, gender, educational background etc. should be archived for possible future use in test refinement and development, the production of new norms and validation.

However, if you do store test information for such long-term purposes then, for the protection of the test takers, you should be very careful to ensure that you do not keep any information which might enable individual people to be identified.
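As an illustration of the kind of archive record this implies, the sketch below stores scores and broad demographic fields but nothing identifying. The field names and values are assumptions for illustration, not a prescribed format.

```python
# Illustrative sketch (assumed field names): an archived test record that keeps
# what is useful for future norming and validation work while holding nothing
# that could identify the individual test taker.
from dataclasses import dataclass, asdict

@dataclass
class ArchivedResult:
    test_name: str
    raw_score: int
    age_band: str          # e.g. "19-24" rather than a date of birth
    gender: str
    educational_background: str
    # deliberately no name, student number, address or other identifiers

record = ArchivedResult("Numeracy screening v1", 23, "19-24", "female", "GCSE")
print(asdict(record))
```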

What are the costs and benefits of diagnostic testing?

In general, the resources needed to do testing well are:

• time
• money
• expertise
• access to relevant populations.

Of these, only the last is readily available within FE colleges. If the development of testing and assessment procedures results in an overall net efficiency gain, then it is worth pursuing. The costs are high, up-front, and very apparent. The benefits can be more difficult to measure and more difficult to attribute directly to the use of diagnostic testing.

In considering cost-benefit trade-offs, we need to look at the possible outcomes associated with using tests in this way (both positive and negative), and the consequences of not using tests.

Bad diagnostic tests result in mis-diagnosis. This is likely to be a more expensive mistake, in both financial and human terms, than no diagnosis at all. Good diagnosis, on the other hand, can result in the efficient targeting of scarce resources (staff time). It will help ensure that students are given the support they need and are not de-motivated by either being 'helped' inappropriately or unnecessarily, or failing to cope with the demands of their course.

Costs include the design and development costs (start-up costs) and the recurrent costs. Design and development costs can be high. As the previous section illustrates, developing these instruments is not a simple process. Costs can be reduced by buying in existing materials. However, if you do this, you need to have good technical evidence to support their quality.

It is necessary to see these costs as part of the overall costs associated with delivering good learning support. As pointed out before, diagnostic testing is pointless unless there is appropriate, competent 'treatment' available.

KEY POINTS

• Quality control in the use of objective assessment depends on the combination of a robust, relevant instrument and a competent user. The user has to be trained to understand the instrument and to use it appropriately within the limits of its technical characteristics. The instrument needs to be fit for its intended purpose.

• In making this judgement, six main areas of quality control criteria have been highlighted: scope, accuracy, relevance, fairness, acceptability and practicality. To judge the overall cost-effectiveness of using any assessment method, one should evaluate it against all six of these criteria.

• Accurate diagnosis is not the same as effective 'treatment', but it does provide the information needed to target support resources more efficiently.

• Overall efficiency needs accurate diagnosis, combined with appropriate placement and good learning support.

• Poor diagnostic tests can be costly to develop and lead to a misdirection and waste of support effort.

• Poor diagnosis can end up costing you more than no diagnosis.

• Consider the costs and benefits of alternative strategies to diagnostic testing, for example, screening with flexible learning support. We should always ask the question: what value would diagnostic testing add to that obtainable through screening tests on their own?


5 Advice to colleges: dos and don'ts

ASSESSMENT POLICY

Do develop and adhere to an organisational policy on assessment.

Where initial assessment involves diagnostic testing, the policy should embody the principles represented in the attached draft Code of Good Practice in Testing (see Appendix 1) and cover in detail:

• test supply and control of materials
• what tests are used
• how they are used
• who is responsible
• what training and evidence of competence are required of test users
• what limits are set on their use of testing
• how results are to be given to students and other support staff
• how data are to be stored
• what provisions are made for monitoring the quality of testing
• what procedures have been put in place for following up the effectiveness of testing (to assess cost-benefits).

MAKING A CONTRACT

The students you test need to understand why you want to test them, what the process involves and what they will get from it. It is a good idea to establish a form of 'contract' with them which makes clear, from the start, how your policy is implemented.

WHAT ARE THE OPTIONS?

If they agree to take tests, what assurances do they have that the tests used will be good ones, and that the people doing the testing will be competent?

What will happen to their results and who will have access to them?

What support will be available for them if the tests suggest they need it?

What are their rights and responsibilities in the process?


TO TEST OR NOT TO TEST

Do use screening. Screening is relatively inexpensive and good tools are available. The incremental benefits of diagnostic testing may be relatively small compared with the costs.

Do not use tests as the sole basis for diagnosis and guidance.

Do not test people for the sake of it.

WHICH TESTS TO USE

Do use tests of known quality.

Do not be fooled by appearances: testing materials may be well presented without being rigorous and robust testing instruments.

Do not buy tests on the basis of personal recommendations or testimonials. It is the technical evidence which you need to consider.

DO-IT-YOURSELF?

Do not just 'buy in' from another college in order to save time and money. Another college may have spent a lot of time developing some tools, but you do not want to be paying for their wasted time if the tools are no good.

Do think long and hard about the costs associated with developing your own instruments before you get involved in doing so.

Do seek expert advice. Test design, development and validation is a highly technical, complex process. Even the experts find it difficult to develop good tests.


6 Conclusion

Those working in FE colleges face a dilemma. It would appear that there is no easy access to commercially produced, good, diagnostic materials at present. Yet FE teachers perceive a need for diagnostic tools to help them tailor support and to advise students better on their options. Well-designed diagnostic tools which will give people the information they want, however, are expensive to develop and are unlikely, in practice, to be a cost-effective option once the time and money involved in their development is set against the increase in quality of provision they may afford.

Many people have produced assessment materials. While some of these may provide reasonable assessments of attainment in relevant areas, they are not developed as 'forward-looking' measures. The danger lies in the results of such assessments being used as if they were objective diagnostic tests.

FEDA is doing new work, funded by the Qualifications and Curriculum Authority, to develop tools for the initial assessment of key skills. Dave Bartram and the University of Hull are involved, and the good practice criteria and guidance published in this report will, of course, inform that work.

If this guidance has done nothing more than make clear the complexity and difficulty of the task involved in developing initial assessment procedures which meet the quality criteria we have set out, then it will have succeeded at least in part. Hopefully, it will also have provided some help to those who wish to ensure that their assessment procedures meet best practice and are effective. Increasing the effectiveness of such procedures will ultimately benefit all concerned: individual students and staff, the college, and the wider community.


Appendices

APPENDIX 1: DRAFT CODE OF GOOD PRACTICE IN TESTING

For incorporation into an organisational policy on assessment

People who are responsible for the use of diagnostic tests must:

Responsibility for competence

• take steps to ensure that they are able to meet all the standards of competence defined by the British Psychological Society (BPS) for the relevant Certificate(s) of Competence in Occupational Testing, and endeavour, where possible, to develop and enhance their competence as test users
• monitor the limits of their competence in objective testing and neither offer services which lie outside their competence nor encourage or cause others to do so

Procedures and techniques

• use tests only in conjunction with other assessment methods and only when their use can be supported by the available technical information
• administer, score and interpret tests in accordance with the instructions provided by the test distributor and to the standards defined by the British Psychological Society
• store test materials securely and ensure that no unqualified person has access to them
• keep test results securely, in a form suitable for developing norms, validation, and monitoring for bias

Welfare of test takers

• obtain the informed consent of potential test takers, making sure that they understand why the tests will be used, what will be done with their results and who will be given access to them
• ensure that all test takers are well informed and well prepared for the test session, and that all have had access to practice or familiarisation materials where appropriate
• give due consideration to factors such as gender, ethnicity, age and educational background in using and interpreting the results of tests
• provide the test taker and other authorised persons with feedback about the results in a form which makes clear the implications of the results, is clear, and is in a style appropriate to their level of understanding
• ensure test results are stored securely, are not accessible to unauthorised or unqualified persons and are not used for any purposes other than those agreed with the test taker.


APPENDIX 2: DEFINING OBJECTIVE TESTS

Objective tests comprise a series of standardised tasks (typically, questions to answer, statements to judge or comment on, problems to solve).

Objective tests differ from 'home-produced' questionnaires, checklists and observations in that tests are designed in such a way that everyone is given the same task and a standard set of instructions for doing it.

People administering the test are given detailed instructions for preparing candidates, administering the task and for scoring and interpreting it.

Because tests have these properties they allow the trained test user to make objective, statistically based judgements and predictions about a range of issues. For example:

• a person's capacity or potential to act or behave in certain ways
• the likelihood that they will be able to cope with the demands of a training course
• their potential for success in certain types of job.

Objective tests fall into two broad categories: measures of maximum performance and measures of typical performance.

Measures of maximum performance measure how well a person can perform. They have right and wrong answers.

Measures of typical performance measure people's preferences, styles and modes of behaviour. They measure their interests and personality; their values and what motivates them; the attitudes and beliefs they hold.

The present guide is only concerned with the first of these.

Tests of maximum performance include general ability tests. These provide a good indication of a person's potential to succeed in a wide range of different activities. Such measures are relatively unaffected by the person's previous experience and learning.

Measures of attainment or mastery, on the other hand, specifically assess what people have learned and the skills they have acquired (e.g. shorthand and typing tests; knowledge of motor mechanics and so on). Where these focus on very specific aspects of skill (e.g. punctuation, forming plurals, and so on) they are often called 'diagnostic tests'.


Between these two extremes there are a number of other types of tests: specific ability tests, aptitude tests, and work-sample tests.

Objective tests provide a means of comparing an individual against known benchmarks (typically, the average performance of some defined population, or explicit mastery criteria).

The actual number of correct answers a person gets on a test is known as their raw scale score. There are, however, a number of other types of score typically used in testing. These various different measures are referred to as standardised scale scores.

For some tests, all the correct items are counted together to produce just one scale score. For other tests, the scoring procedure may divide the items into two or more groups, with each group of items being used to produce its own scale score.

So, what is a scale? To give an example, we might have a screening test which contained 30 items designed to measure literacy and 30 items to measure numeracy.

If the two sets of items are presented as two separate tests we would call it a test battery (containing two tests: one for literacy and one for numeracy) and we would have two scale scores.

Alternatively, we might use the test to provide a general measure of basic skills. In that case all 60 items would be used to produce a single scale score.

A further possibility would be for the 30 items in each test to be broken down into sub-scales, each measuring some distinct aspect of literacy (spelling, use of punctuation, etc.) or numeracy (fraction to decimal conversion, multiplication, etc.).

The extent to which we can break down a scale into meaningful subscales depends on how the instrument was constructed, and what degree of accuracy we need in our measurement. In general, the more you break scales down into components, the less accurate the component measures become.
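The Spearman-Brown formula gives a rough feel for why shorter component scales are less accurate. The sketch below is purely illustrative; the starting reliability figure is an assumption, not a value from any particular test.

```python
# Illustrative sketch: the Spearman-Brown prophecy formula, which predicts how
# reliability falls as a scale is cut down into shorter component scales.
def spearman_brown(reliability: float, length_factor: float) -> float:
    """Predicted reliability when test length is multiplied by length_factor."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

full_scale_reliability = 0.90          # assumed value for a 30-item scale
for items in (30, 15, 10, 6):
    predicted = spearman_brown(full_scale_reliability, items / 30)
    print(f"{items:>2} items -> predicted reliability {predicted:.2f}")
# 30 -> 0.90, 15 -> 0.82, 10 -> 0.75, 6 -> 0.64
```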

There is a very real danger of people taking tests designed for screening purposes and then over-interpreting them. For example, a general screening test for numeracy would probably contain items covering addition, subtraction, multiplication, division, conversion between decimals and fractions, use of percentages, and so on. This does not mean that you can look at a person's performance on each type of item and diagnose their relative strengths and weaknesses. You can only do that if the test was designed so that each item type produces a scale score (of known reliability and validity).


Comparing scores with other people's scores: using norm-referenced scores

A norm-referenced score defines where a person's score lies in relation to the scores obtained by certain other people. This group of other people is known as the norm group. The reason for using norm-referenced scores is to see whether the person tested is below average, average or above average with respect to the performance of the norm group. Such scores are relative measures as they depend on who the 'other people' are. For example, a numeracy score which may be average for one group may appear 'low' when compared against a university graduate norm group, and 'high' when compared against a sample of people drawn from a population of school leavers from a depressed inner-city area.

Typically, norm-referenced scores are expressed either as percentiles or percentile-based grades, or on one of a number of standard score scales (e.g. sten scores and T-scores). For example:

• The Basic Skills Test (NFER-NELSON) uses normalised T-scores and percentile scores (and gives 68% true score confidence bands).

• The Foundation Skills Assessment (Psychological Corporation) provides percentile points, stens, stanines and ratio scale scores which allow comparison between the different levels (A, B and C) of the test.
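The conversions behind these scales are simple once a norm group's mean and standard deviation are known. The sketch below is illustrative only: the norm-group scores are invented, and published tests use properly constructed norm tables rather than a calculation like this.

```python
# Illustrative sketch (invented norm group): converting a raw score into some
# common norm-referenced scales. Real tests use published norm tables.
from statistics import mean, stdev

norm_group = [34, 41, 38, 45, 29, 50, 37, 42, 33, 46]   # hypothetical raw scores
mu, sigma = mean(norm_group), stdev(norm_group)

def norm_referenced(raw: int) -> dict:
    z = (raw - mu) / sigma
    return {
        "z": round(z, 2),
        "T": round(50 + 10 * z),                        # T-scale: mean 50, SD 10
        "sten": min(10, max(1, round(5.5 + 2 * z))),    # sten: mean 5.5, SD 2
        "percentile": 100 * sum(x <= raw for x in norm_group) / len(norm_group),
    }

print(norm_referenced(44))   # meaning depends entirely on who the norm group is
```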

Comparing scores against a standard: using criterion-referenced scores

To make judgements about a person's ability to cope with the demands of a job or training course, test scores have to be related to performance on the 'criterion' task, e.g. the training course. This is typically done in one of two ways.

In one approach, 'experts' make judgements, based on an examination of the content of the test items and of an analysis of the demands which will be made on people by the course, about what minimum test scores would be required for a person to be able to cope with particular aspects of a training programme. This method is variously referred to as domain-referencing, content-referencing (and also, confusingly, criterion-referencing). This is the most common approach, widely used in educational assessment, and likely to be the method used by most FE colleges. The Basic Skills Assessment (The Basic Skills Agency) adopts this approach, producing raw scores which are criterion-referenced in relation to three stages: Foundation, Stage 1 and Stage 2.

The second method involves making statistical predictions of future performance from a person's scores (called predictive validation). This uses actual data about the relationship between test scores and educational attainment or training course performance. Establishing this sort of relationship is especially important when the results of tests are used in the process of selecting people for jobs or training courses, as opposed to post-selection evaluation of their support needs.
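At its simplest, predictive validation amounts to correlating initial test scores with a later measure of course performance. The sketch below uses invented figures and a tiny sample purely to show the calculation; real validation needs a much larger, representative sample and specialist advice on interpretation.

```python
# Illustrative sketch (invented data): a minimal predictive-validation check,
# correlating initial test scores with later course marks.
from statistics import correlation   # available in Python 3.10+

initial_scores = [12, 25, 31, 18, 40, 22, 35, 28, 15, 38]
course_marks   = [45, 58, 66, 50, 78, 55, 70, 60, 48, 74]

r = correlation(initial_scores, course_marks)
print(f"predictive validity coefficient r = {r:.2f}")
# A coefficient from a sample this small would need confirming on far more
# people before the test was relied on for placement or selection decisions.
```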

Objective tests have quantified levels of accuracy of measurement and a body of evidence supporting their claim to measure what they say they measure.

One of the properties which distinguishes objective assessment from other forms of assessment is the quality and quantity of the data available about the instrument. The interpretation of any scale score is aided by:

• information about how scores are distributed in the general population and in other more specific groups of people
• information about variations in patterns of score distribution related to demographic variables such as age, gender, ethnic group and so on
• information about variations in patterns of score distribution related to occupational or educational criteria.

Information on the effects of demographic variables is very important in judging the suitability of an instrument and in aiding interpretation of it. One should always look for data on these variables: for example, whether scores on each scale differ between males and females or not, whether they are subject to ethnic group bias effects, and whether the instrument has been used on populations similar to those you would be assessing.


APPENDIX 3: HOW ARE DIAGNOSTIC TESTS DESIGNED AND DEVELOPED?

This appendix focuses on the development of diagnostic tests. It assumes that there is already a screening process in place identifying people with probable learning support needs. Diagnostic tests are then used to specify in more detail the nature of those needs and the extent to which they will require support.

In principle, the process of developing diagnostic tests is straightforward. The difficulty and complexity lie in the details of doing it properly and emerging with a robust and useful instrument. Specialist technical assistance will be needed to do this properly. Most of the technical difficulties arise towards the end of the process (and relate to data analysis and technical documentation issues). However, you are strongly advised to seek the advice of a chartered psychologist with specialist skills in test development from the start. They will be able to guide you through the initial stages so that you stand the best chance of producing an analysable and useful set of data at the end.

Profile the demands which the programme will make on the learner

Break down the course programme content in terms of the demands the course materials and content will make on the learner. As well as helping to define levels below which support will be needed, this also provides an opportunity for checking whether these levels of demand are appropriate for the course. For example, could the course objectives still be met if difficult materials were revised to make them less demanding of basic skills? Costly learning support provision could sometimes be reduced by changing the demands made on learners by the course.

Define the essential learning skills required to cope with these demands

Having profiled the demands, identify the skills required to cope with these: for example, literacy, numeracy, communications, IT, etc.

Identify the minimum necessary levels of skill needed in each area

In each area, ask what the minimum necessary level of skill would be for someone to be able to cope with the course content. This should be the level below which the person's lack of skill will start to get in the way of, or interfere with, their ability to keep up with the course and maintain steady progress.


Devise or choose tasks which will assess the skills concerned

Having identified the relevant areas of skill, choose tasks which will assess those skills. This may be obvious in some cases (e.g. coping with fraction to decimal conversions), less so in others (levels of literacy needed to understand the range of reading materials which accompany the course work).

Set the difficulty of the tasks for optimum discrimination around the minimum necessary level

In designing the tasks for the test, aim at the people on the borderline of the level of competence needed to cope with the course. They should be able to get about 50% of the questions right (assuming open-ended answers, or multiple choice with corrections for guessing). If the test is too easy or too hard, it will not identify the deficits you are looking for. Getting this right is very difficult. A common mistake is to make the test representative of the course materials. This is wrong because much of the course may not be problematic. The test should be selective, focusing on areas where there is likely to be difficulty for those identified as 'at risk' by the general screening process.
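For multiple-choice items, the 'correction for guessing' mentioned above is usually the classical rights-minus-wrongs adjustment. The sketch below shows that formula with invented numbers; whether to apply such a correction at all is a design decision for the test developer.

```python
# Illustrative sketch: the classical correction-for-guessing formula,
# R - W / (k - 1), where k is the number of options per item. A candidate
# guessing blindly then scores around zero rather than at the chance rate.
def corrected_score(right: int, wrong: int, options_per_item: int) -> float:
    return right - wrong / (options_per_item - 1)

# Blind guessing on 40 four-option items gives about 10 right, 30 wrong:
print(corrected_score(right=10, wrong=30, options_per_item=4))   # 0.0
# A borderline candidate getting roughly half the items right:
print(corrected_score(right=20, wrong=20, options_per_item=4))   # about 13.3
```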

Ensure the content of the items has sufficient scope to cover the range required

This is a similar point to the one made above. The items, or questions, in the test need to cover all the relevant areas. This is vital for diagnostic instruments. Never assume that there is a problem in one area simply because you have diagnosed a problem in some other area. People are likely to differ considerably in the extent to which their learning support needs are general or specific.

Ensure there are sufficient items to give a reliable measure

The main shortcoming of the 'home-grown' variety of diagnostic tests is that they do not contain sufficient items for a reliable diagnosis to be made. It is not uncommon for tests produced in colleges to contain only one or two items relating to a diagnostic category. This is quite insufficient to diagnose problems at an individual level. As a rough and ready rule of thumb, the minimum number of items per diagnostic category should be between 6 and 12. The number really depends on how narrow or broad the category is. For very narrow, highly specific ones, you may be able to produce reliable scales with only six items.

A related issue is that when devising your own materials, you need to make provision for the fact that some of the items you produce will not work as you expected them to. Most test developers reckon on trialling as many as twice the number of items they expect to end up with in the final test. So, if you are devising a test to diagnose difficulties in six different but fairly specific areas, you would probably need to generate over 100 items in the first instance.

Some forms of test do not use 'items' in the conventional sense of the word (that is, discrete questions). You may instead use passages of text about which a number of questions are asked; you may ask people to write free text which is then content-analysed; and so on. While it is important that the form and content of the test materials are relevant to what is being assessed, the issue of reliability and robustness remains. For even the most open-ended form of assessment, you need a well-defined scoring protocol which determines what 'items' of data are obtained. These item scores should be treated, for statistical purposes, just as the scores you would obtain with closed, multiple-choice test questions.

A final factor to consider in designing test items, and in ensuring there are sufficient of them, is inter-dependence. If you present people with a short passage of text and then ask 10 questions about it, there is likely to be interdependence between the accuracy of the answers because they all relate to a common 'stem'. Technically, this means that a measure of the reliability of this test will be artificially inflated. An extreme example would be asking the same question 10 times. You would have a highly reliable, but not very useful, measure. So, one needs to be careful about how far responses may be inter-dependent: for example, where getting Question 1 wrong implies you are likely to get Questions 2-10 wrong as well because they all relate to the same material.

Pilot test the items using people with known learning support needs or known levels of literacy or numeracy

The best way of getting data on whether the test is right or not is to try it out. The better defined the sample of people it is tried out on, the better the quality of the information obtained. At this stage you are not looking for large numbers of people: that comes later. You do, however, want to be able to check that the types and levels of skill you are intending to measure can be identified in people already known to have those levels and types of skill.

Ask people with relevant expertise to check the item domain and difficulty. This can be done using sorting and rating tasks

In addition, you can use expert judgement to help check out the item design. A common approach to this is to get a small group of experts (five or six will do), and ask them to carry out a sorting task.

The task is to sort the test questions into piles, each pile relating to one of the characteristics you are attempting to measure. Suppose you were trying to diagnose problems in three aspects of numeracy, and had designed items to be appropriate to 'Foundation' level skill. You provide your experts with a matrix in which to sort the items: the rows would be difficulty levels ('Foundation', 'Level 1' and 'Level 2', say), and the columns would be headed with the various aspects of numeracy into which the items could be classified, together with a 'Don't know' column. Each item in the test is written on a separate card, and the experts are given a pack of cards and asked to place them in the appropriate cells in the matrix. Your experts can be course team members, or others who would be in a position to make the necessary judgements.

When this has been done by all your experts, you need to see whether there is clear agreement between them. Items placed in the 'Don't know' column by more than one person, or those placed in more than one cell, should be looked at very carefully or dropped.
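Tallying the sort results is straightforward to do by hand, but a small script like the sketch below (with invented placements and made-up item names) shows the kind of rule that might flag items for review: more than one 'Don't know', or no clear modal cell.

```python
# Illustrative sketch (invented data): tallying an expert card-sort to flag
# items where the experts disagree about category or difficulty level.
from collections import Counter

# Each expert's placement of each item: (difficulty level, aspect) or "Don't know".
sorts = {
    "item_01": [("Foundation", "fractions"), ("Foundation", "fractions"),
                ("Foundation", "fractions"), ("Level 1", "fractions"),
                ("Foundation", "fractions")],
    "item_02": [("Foundation", "percentages"), "Don't know",
                ("Level 1", "percentages"), "Don't know",
                ("Foundation", "decimals")],
}

for item, placements in sorts.items():
    tally = Counter(placements)
    modal_cell, votes = tally.most_common(1)[0]
    if tally["Don't know"] > 1 or votes < len(placements) - 1:
        print(f"{item}: review or drop ({dict(tally)})")
    else:
        print(f"{item}: keep as {modal_cell}")
```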

Trial the test

Having carried out the pilot testing and the expert sorting, you will now have fewer items than you started with (for example, an initial set of 100 questions may be down to about 70). These now form the basis for getting some real data under proper testing conditions. In order to do any useful objective appraisal of the test, you will need to get around 100 people (preferably more) from the target population, that is, those people with whom the test is to be used.

Do item analysis to see if the items are working as intended

This is where you may need to call on specialist help. However, there are some simple things you can look at. For example, what proportion of people get each item right? Any item which is got right by nearly everybody, or by hardly anyone, is not going to be of any use, as it will not enable you to discriminate between people. So, you would normally discard items which have very high or very low scores (typically, more than 90% correct or less than 10% correct). The exact criteria depend on a range of matters: for example, what sort of items they are, what the guessing rates are, and what the constitution of the trial population was. This last point is very important. If you trial the test with a sample of people who do not have the relevant learning difficulties, then they will be likely to have very high scores on all the items. On the other hand, if your trial group only contains people with these problems, the items will have very low scores. The normal guidelines, then, only apply when your trial group is a reasonable mixture of people with and without the target problems.
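A first pass at this kind of item analysis only needs the facility value (proportion correct) of each item. The sketch below uses a tiny invented response matrix to show the calculation and the rough 10%-90% guideline; real item analysis would be run on the full trial sample.

```python
# Illustrative sketch (invented data): item facility values from a trial
# administration. Rows are candidates, columns are items; 1 = correct.
responses = [
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 0],
    [1, 0, 0, 0, 0],
    [1, 1, 0, 1, 0],
]

n_candidates = len(responses)
for item in range(len(responses[0])):
    facility = sum(row[item] for row in responses) / n_candidates
    flag = "review or discard" if not 0.10 <= facility <= 0.90 else "ok"
    print(f"item {item + 1}: facility = {facility:.2f}  ({flag})")
```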

Other scale-construction processes which need to be carried out at this stage are reliability analysis, examination of item discriminations and, possibly, principal components analysis. These require specialist software and specialist knowledge. It may also be possible, depending on how your trial sample of people is constituted, to carry out a preliminary validity analysis. If you know which people had, and which did not have, the target difficulties, you can see what proportion of them were correctly identified by the test. Again, this sort of analysis requires specialist help. Where target groups are well defined within the trial sample, discriminant function analysis can be used to develop prediction scores.
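To give a feel for two of these checks, the sketch below computes an internal-consistency estimate (Cronbach's alpha) and corrected item-total correlations from an invented 1/0 response matrix. It is a toy calculation under stated assumptions; real scale construction uses much larger samples and dedicated psychometric software, as the text advises.

```python
# Illustrative sketch (invented data): Cronbach's alpha and corrected
# item-total correlations for a small set of 1/0 scored items.
from statistics import variance, correlation   # correlation needs Python 3.10+

responses = [            # rows = candidates, columns = items
    [1, 1, 0, 1, 0, 1],
    [1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0],
    [1, 1, 0, 0, 1, 1],
    [0, 0, 0, 0, 0, 1],
]
k = len(responses[0])
totals = [sum(row) for row in responses]

# Internal consistency of the whole scale.
item_variances = [variance([row[i] for row in responses]) for i in range(k)]
alpha = (k / (k - 1)) * (1 - sum(item_variances) / variance(totals))
print(f"Cronbach's alpha = {alpha:.2f}")

# Discrimination: how each item relates to the rest of the scale.
for i in range(k):
    item = [row[i] for row in responses]
    rest = [total - score for total, score in zip(totals, item)]
    print(f"item {i + 1}: corrected item-total r = {correlation(item, rest):.2f}")
```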

Revise the test

The purpose of this stage is to drop items which are redundant or do not work. This is done on the basis of the scale-construction analysis work. All the final validation and reliability analysis is done on the revised test, not the original set of items which were used.

Establish criterion points and cut-off scores

It may be possible to set provisional cut-off scores on the new test using the trial sample data, or it may be necessary to get more data, using the final version of the test, to do this. Again, you may need specialist advice on this.

Document the technical information

Once all this work has been completed, you must ensure that the technical information about the test is properly documented. This will normally be in addition to, and separate from, any documentation you might produce for those staff using the test on a day-to-day basis. It is doubly vital to produce good technical documentation if you intend offering the test to other institutions.

Producing technical documentation is a specialist job, and one for which staff in FE colleges are unlikely to have the necessary expertise. It is a task outside the range of competence of the average test user.


Follow use of the test

Once the test has been developed, it is vital that its value is assessed. This means following up all the outcomes:

• How accurately does it identify programme-specific support needs?
• Is it adding anything to the information obtained from screening or other sources?
• What are the false positive and false negative rates?
• Would it be worth investing further time and effort in making improvements?

The answers to these questions will all add to the value of the instruments and should be part of the technical documentation. If you are planning to sell your materials to others, it is even more important to make sure you have evidence to support whatever claims you make about them.
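False positive and false negative rates drop straight out of the follow-up records once you know, for each student, what the test said and what support they actually turned out to need. The sketch below uses invented follow-up data to show the calculation.

```python
# Illustrative sketch (invented follow-up data): false positive and false
# negative rates for a diagnostic test, judged against students' actual
# support needs as they emerged during the course.
# Each pair is (test_flagged_support_need, actually_needed_support).
follow_up = [(True, True), (True, False), (False, False), (True, True),
             (False, True), (False, False), (True, True), (False, False)]

actually_needed = [flagged for flagged, needed in follow_up if needed]
did_not_need    = [flagged for flagged, needed in follow_up if not needed]

false_negative_rate = actually_needed.count(False) / len(actually_needed)
false_positive_rate = did_not_need.count(True) / len(did_not_need)

print(f"false negatives: {false_negative_rate:.0%} of students who needed support were missed")
print(f"false positives: {false_positive_rate:.0%} of students who coped were flagged")
```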

If you are going to distribute your test materials (either free of charge or for gain) you need to make clear to potential users how the tests should be used and what they can and cannot do with the materials (in terms of making changes, adapting them for their use, photocopying, passing them on to others, etc.).



In reviewing the management and implementation of initial assessment, colleges need to reflect on its purposes and the extent to which existing policies, structures, roles and responsibilities are helpful. This comprehensive report is designed to support curriculum managers, student service managers and learning support managers, and contains:

• examples of colleges' practical approaches
• advice on technical aspects, including definitions
• tools and techniques, including quality criteria
• advice to colleges, including a draft code of practice.

