Post on 04-Jan-2016

Review: Performance-Based Assessments

• Performance-based assessment
• Real-life setting

• H.O.T.S. (higher-order thinking skills)

• Techniques:
• Observation
• Individual or group projects
• Portfolios
• Performances
• Student logs or journals

• Developing performance-based assessments
• Determining the purpose of assessment

• Deciding what constitutes student learning

• Selecting the appropriate assessment task

• Setting performance criteria

Review: Grading

• Grading process:

• Making grading fair, reliable, and valid
• Determine defensible objectives
• Ability-group students
• Construct tests which reflect the objectives
• No test is perfectly reliable
• Grades should reflect status, not improvement
• Do not use grades to reward good effort
• Consider grades as measurements, not evaluations

Objectives of instruction → Test selection and administration → Results compared to standards → Final grades

Cognitive Assessments

Physical Fitness Knowledge

HPHE 3150, Dr. Ayers


Test Planning

• Types
• Mastery (e.g., driver's license): meet minimum requirements
• Achievement (e.g., mid-term): discriminate among levels of accomplishment

Table of Specifications (content-related validity)

• Content Objectives: history, values, equipment, etiquette, safety, rules, strategy, techniques of play

• Educational Objectives (Bloom's taxonomy, 1956)

knowledge, comprehension, application, analysis, synthesis, evaluation

Table of Specifications for a 33-Item Exercise Physiology Concepts Test

(Ask-PE, Ayers, 2003)

T of SPECS-E.doc
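The weighting logic of a Table of Specifications can be sketched in code. The content areas, Bloom levels, and weights below are hypothetical, not the actual Ask-PE weightings; the point is only how two sets of weights allocate items across a 33-item test.

```python
# Hypothetical Table of Specifications for a 33-item test.
# Each cell gets: total items x content weight x cognitive weight.

content_weights = {          # fraction of the test per content area (sums to 1.0)
    "rules": 0.20,
    "techniques of play": 0.35,
    "strategy": 0.25,
    "safety": 0.20,
}
cognitive_weights = {        # fraction per Bloom level assessed (sums to 1.0)
    "knowledge": 0.40,
    "comprehension": 0.35,
    "application": 0.25,
}

TOTAL_ITEMS = 33
for area, cw in content_weights.items():
    for level, bw in cognitive_weights.items():
        items = round(TOTAL_ITEMS * cw * bw)
        print(f"{area:>20} / {level:<13}: {items} item(s)")
```

Because each cell is rounded, the cell counts may need a small manual adjustment to sum to exactly 33.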

Test Characteristics

• When to test
• Often enough for reliability, but not so often that tests lose their usefulness

• How many questions (p. 145-6 guidelines)
• More items yield greater reliability
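Why more items yield greater reliability can be quantified with the Spearman-Brown prophecy formula (not named on the slide, but the standard result for test lengthening):

```python
# Spearman-Brown prophecy formula: predicted reliability of a test
# lengthened by a factor k, given its current reliability r.
def spearman_brown(r: float, k: float) -> float:
    return (k * r) / (1 + (k - 1) * r)

# Doubling a test whose reliability is .60 raises it to .75:
print(round(spearman_brown(0.60, 2), 2))
```

The gain shrinks as reliability rises, so adding items helps most when the original test is short or unreliable.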

• Format to use (p. 147 guidelines)
• Oral (NO), group (NO), written (YES)
• Open book/note, take-home

• Advantages: ↓ anxiety, can ask more application questions
• Disadvantages: ↓ incentive to prepare, uncertainty about who does the work

Test Characteristics

• Question types
• Semi-objective

• Short-answer
• Completion
• Mathematical

• Objective
• True/false
• Matching
• Multiple-choice
• Classification

• Essay

Semi-objective Questions

• Short-answer, completion, mathematical

• When to use (factual & recall material)

• Weaknesses

• Construction Recommendations (p. 151)

• Scoring Recommendations (p. 152)

Objective Questions

• True/False, matching, multiple-choice

• When to use (M-C: MOST IDEAL)
• FORM7 (B,E).doc
• p. 160-3: M-C guidelines

• Construction Recommendations (p. 158-60)

• Scoring Recommendations (p. 163-4)

Figure 8.1: The difference between extrinsic and intrinsic ambiguity (A is correct). Three response patterns are contrasted: too easy (nearly everyone chooses A), extrinsic ambiguity (weak students miss), and intrinsic ambiguity (all foils are equally appealing).

Cognitive Assessments I

• Explain one thing that you learned today to a classmate

Review: Cognitive Assessments I

• Test types
• Mastery
• Achievement

• Table of Specifications
• Identify content, assign cognitive demands, weight areas
• Provides support for what type of validity?

• Question types
• Semi-objective: short-answer, completion, mathematical
• Objective: true/false, matching, multiple-choice

• Which is desirable: intrinsic or extrinsic ambiguity?

Essay Questions

• When to use (definitions, interpretations, comparisons)

• Weaknesses

• Scoring

• Objectivity

• Construction & Scoring recommendations (p. 167-9)

Characteristics of “Good” Tests

• Reliable

• Valid

• Average difficulty

• Discriminate
• Gotten correct by more knowledgeable students
• Missed by less knowledgeable students

• Time consuming to write

Quality of the Test

• Reliability
• Role of error in an observed score
• Error sources in written tests

• Inadequate sampling

• Examinee’s mental/physical condition

• Environmental conditions

• Guessing

• Changes in the field (dynamic variable being measured)
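In classical test theory terms, these error sources inflate the error component of an observed score, and reliability is the share of observed-score variance that is true-score variance. A minimal numeric sketch (the variances are hypothetical):

```python
# Classical test theory: observed score = true score + error.
# Reliability = true-score variance / observed-score variance.

true_variance = 80.0    # hypothetical spread of students' true knowledge
error_variance = 20.0   # variance added by sampling, guessing, conditions, etc.

observed_variance = true_variance + error_variance  # errors assumed independent
reliability = true_variance / observed_variance
print(reliability)  # 0.8
```

Shrinking any error source (better sampling of content, controlled conditions, less guessing) raises reliability without changing the students at all.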

Quality of the Test

• Validity
• CONTENT validity is key for written tests
• Is critical information assessed by the test?
• Table of Specifications helps support validity

• Overall Test Quality
• Based on individual item quality (steps 1-8, p. 175-80)

Item Analysis

• Used to determine quality of individual test items

• Item Difficulty: percent answering correctly

• Item Discrimination: how well the item "functions"; also how "valid" the item is, based on the total test score criterion

Item Difficulty

Difficulty = 100 × (n_cU + n_cL) / (n_U + n_L)

where n_cU and n_cL are the numbers answering correctly in the upper and lower scoring groups, and n_U and n_L are the sizes of those groups.

Range: 0 (nobody got it right) to 100 (everybody got it right). Goal = 50%

This allows for maximum item discrimination

Item Discrimination

Discrimination = 100 × (n_cU − n_cL) / n_U

where n_cU and n_cL are the numbers answering correctly in the upper and lower groups, and n_U is the number of students in each group (groups of equal size).

Below 20% or negative (poor); 20-40% (acceptable); goal > 40%

Positive discrimination increases reliability; negative discrimination decreases it
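The two formulas above translate directly into code. The counts here are hypothetical, with 10 students in each of the upper and lower groups:

```python
# Item analysis using the upper/lower-group formulas.

def item_difficulty(correct_upper: int, correct_lower: int,
                    n_upper: int, n_lower: int) -> float:
    """Percent of upper + lower group students answering the item correctly."""
    return 100 * (correct_upper + correct_lower) / (n_upper + n_lower)

def item_discrimination(correct_upper: int, correct_lower: int,
                        n_group: int) -> float:
    """Upper-minus-lower percent correct; groups are equal in size."""
    return 100 * (correct_upper - correct_lower) / n_group

# Item answered correctly by 8 of 10 upper and 3 of 10 lower students:
print(item_difficulty(8, 3, 10, 10))   # 55.0 -> near the 50% goal
print(item_discrimination(8, 3, 10))   # 50.0 -> above the 40% goal
```

Note how the two indices interact: an item everyone answers correctly (difficulty 100) forces discrimination to 0, which is why moderate difficulty is the goal.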

Figure 8.4: The relationship between item discrimination and difficulty

Moderate difficulty maximizes discrimination

Sources of Written Tests

• Professionally Constructed Tests (FitSmart, Ask-PE)

• Textbooks (McGee & Farrow, 1987)

• Periodicals, Theses, and Dissertations

Questionnaires

• Determine the objectives
• Delimit the sample
• Construct the questionnaire
• Conduct a pilot study
• Write a cover letter
• Send the questionnaire
• Follow up with non-respondents
• Analyze the results and prepare the report

Constructing Open-Ended Questions

• Advantages
• Allow for creative answers
• Allow respondents to detail answers
• Can be used when the possible categories are numerous
• Probably better when complex questions are involved

• Disadvantages
• Analysis is difficult because of non-standard responses
• Require more respondent time to complete
• Can be ambiguous
• Can result in irrelevant data

Constructing Closed-Ended Questions

• Advantages
• Easy to code
• Result in standard responses
• Usually less ambiguous
• Ease of response relates to increased response rate

• Disadvantages
• Frustration if the correct category is not present
• Respondent may choose an inappropriate category
• May require many categories to capture ALL responses
• Subject to possible recording errors

Factors Affecting the Questionnaire Response

• Cover Letter: be brief and informative

• Ease of Return: you DO want it back!

• Neatness and Length: be professional and brief

• Inducements: money and flattery

• Timing and Deadlines: time of year and sufficient time to complete

• Follow-up: at least once (two follow-ups yield about the best response rate you will get)

The BIG Issues in Questionnaire Development

• Reliability: consistency of measurement. Stability reliability: 2-4 weeks between administrations

• Validity: truthfulness of response. Good items, expert review, pilot testing, confidentiality/anonymity

• Representativeness of the sample: to whom can you generalize?

Cognitive Assessments II

Ask for clarity on something that challenged you today