
The Collegiate Learning Assessment (CLA) Project

Roger Benjamin

RAND Corporation’s Council for Aid to Education

October 10, 2003

Themes

• Why Measure Educational Outcomes?

• Obstacles to Overcome

• The CLA Approach in Context

• Feasibility Study Results

• An Opportunity to Participate

Why Measure Educational Outcomes?

• Improve educational programs

• Demand for accountability

– Rising costs

– Reduced budgets

– Competition from distance learning

Changing Context for CLA (1)

• Accountability drive continues to mount

– Bush administration likely to place performance measures in Higher Education Reauthorization Act

– Tension between higher education leaders and state leaders appears to be increasing

– Strong interest in assessment among private higher education institutions

• Participation/attainment gap between ethnic/racial groups continues to widen

Changing Context for CLA (2)

• Budget Crisis

– Private colleges: Endowments have declined significantly

– Public colleges: 43 states exhibit medium to severe deficits, totaling $78 billion

• Tuition increasing sharply

– 10% during ’02–’03; ’03–’04 increases could be higher

The State Has A Critical Role in Higher Education

• The state provides the instructional budget and infrastructure support

• The state sets objectives for

– Educational levels to be achieved by entering students

– Participation rates by minority groups

– Minimum passing scores for professional school graduates

Basic Methodological Hurdles to Overcome

• Direct comparisons between states problematic

• Comparing aggregated scores of institutions at the state level flawed

• Use of proxy measures problematic because of selection bias

Are State-Based Comparisons Possible?

• States may conduct comparisons over time within their states

• States may wish to establish minimum performance levels and benchmark them against the same measures in states judged most similar to them.

Institutional Barriers to State-Based Accountability Movement

• Structure of higher education governance not conducive to top-down policy strategies

• In particular, state-based strategies confront norms that cede decision making regarding pedagogy and curriculum, including assessment, to the faculty

The Link Between Productivity, Accountability and Assessment

• There must be a metric against which to evaluate the productivity concepts

• The quality of student learning outcomes is the only serious candidate

• Moreover, one cannot introduce accountability until standards of performance are set

• However, unless the assessment strategy is acceptable to faculty, little progress can be expected

Competing Visions

• Faculty use assessments focused on improving curriculum and pedagogy, typically at the department or institutional level, and are not interested in inter-institutional comparisons

• State-based approaches are focused on accountability, aggregate data to the state level, and use proxy measures

Issues to Solve

• Performance measures may offer an opportunity to reconcile the goals and approaches of the state and institutions of higher education, but the rules of engagement need to be worked out

• Consensus on measures, approach, and what is to be reported must be reached

Current Approaches

– Accreditation Review (inputs)

– Actuarial indicators (graduation rates & access)

– Faculty surveys (US News & World Report)

– Student surveys (NSSE & CIRP)

– Direct measures of student learning

Problems with Direct Measures

• No common core curriculum

• Too many academic majors

• Course grades are professor/school specific

• Gen Ed skills tests have limited sensitivity to instruction

• Graduate/professional school admission tests are not appropriate because:

– Too few students take them

– Selection bias in who takes them

– Not focused on educational outcomes


Sample CLA Performance Measure

“Crime Reduction”

The Task

“Jamie Eager is a candidate who is opposing Pat Stone for reelection. Eager critiques the Mayor’s solution to reducing crime by increasing the number of police officers. Eager proposes the city support a drug education program for addicts because, according to Eager, addicts are the major source of the city’s crime problem.”

“Mayor Pat Stone asks you to do two things: (1) evaluate the validity of Eager’s proposal, and (2) assess the validity of Eager’s criticism of the mayor’s plan to increase the number of officers.”

The Documents

“Mayor Stone provides you with various documents related to this matter, but warns you that some of them may not be relevant. Your task is to review these materials and respond to the mayor’s request in preparation for tomorrow night’s public debate with Eager.”

• Memo

• Newspaper Article

• Crime Statistics

• Crime and Drug Use Tables

• Research Brief

• Crime Rates Chart

• Research Abstracts

Feasibility Study Measures

– Six 90-minute CLA Performance Measures

– Two types of GRE writing prompts

– NSSE questionnaire

– SAT (or converted ACT) score

– Cumulative GPA

– Task evaluation form

Sample

– 14 schools varied greatly in:

• Size

• Type

• Location

• Student characteristics

– About 100 students/school (total N = 1360)

– Roughly equal N’s per class within a school

– Not a random sample, participation optional

Small but Significant Class Effects

– After controlling for SAT scores and school (a model sketch follows below)

– Mean test battery scale score increase relative to freshmen (sd = 150):

10 pts Sophomores

27 pts Juniors

38 pts Seniors
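The slides do not specify the statistical model behind these figures, but one plausible reading is a regression of the battery score on class level with SAT and school controls. A minimal sketch under that assumption; the file cla_scores.csv and the columns battery_score, sat_total, class_level, and school are all hypothetical:

```python
# Hypothetical sketch of the adjusted class-effects comparison described
# above: regress the scaled battery score on class level, controlling for
# SAT score and school. All file and column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cla_scores.csv")  # one row per student (hypothetical)

model = smf.ols(
    "battery_score ~ C(class_level, Treatment('Freshman'))"
    " + sat_total + C(school)",
    data=df,
).fit()

# The class-level coefficients estimate mean gains over freshmen,
# on the same scale as the slide (sd = 150).
print(model.params.filter(like="class_level"))
```

On this reading, coefficients near 10, 27, and 38 points for sophomores, juniors, and seniors would reproduce the figures above.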

School Effects

[Figure: scatter plot of average scaled task score (y-axis) against total scaled SAT score (x-axis); both axes run from 800 to 1400]
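One way to read the plot is that a school’s “effect” is its distance above or below the line relating mean task scores to mean SAT scores. A minimal sketch of that residual calculation, reusing the hypothetical cla_scores.csv layout assumed above:

```python
# Hypothetical sketch of the school-effects idea behind the scatter plot:
# regress each school's mean task score on its mean SAT score and treat
# the residual (distance from the fitted line) as the school effect.
import pandas as pd
import statsmodels.formula.api as smf

schools = (
    pd.read_csv("cla_scores.csv")  # hypothetical per-student file
    .groupby("school", as_index=False)[["task_score", "sat_total"]]
    .mean()
)

fit = smf.ols("task_score ~ sat_total", data=schools).fit()
schools["school_effect"] = fit.resid  # positive = above expectation

print(schools.sort_values("school_effect", ascending=False))
```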

Feasibility Study Conclusions

– General approach is sound for measuring school (as distinct from individual student) effects

– Computer scoring of answers to GRE prompts works reasonably well and saves money

– An acceptable 3-hour test package would contain one 90-minute task and two GRE prompts

– Some tasks may interact with academic major

CLA Administration: CAE Will…

• Provide information on assembling the sample

• Provide templates for letters to use in recruiting students

• Provide guidelines for proctoring the session(s)

Campus Representatives Have Flexibility In…

• Scheduling the sessions

• Campus representatives will need to

– Collect registrar data

– Collect IPEDS data

Two Approaches

• Cross-Sectional Studies

• Longitudinal Studies

Cross-Sectional Studies

• During the spring term, 100 seniors and 100 sophomores are sampled. Analyses will permit value-added comparisons between institutions (see the sketch after these bullets).

• If freshmen/first-year students are also sampled in the subsequent fall term, analyses will provide more sophisticated information about value added within the institution.
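The slides do not spell out the computation, but a cross-sectional value-added comparison of this kind might adjust scores for entering ability and then compare class means within each school. A sketch under those assumptions; the file and column names are hypothetical:

```python
# Hypothetical sketch of a cross-sectional value-added comparison:
# per school, the gap between SAT-adjusted senior and sophomore means.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cla_spring.csv")  # spring-term sophomores and seniors

# Adjust for entering ability, then compare class means within schools.
df["resid"] = smf.ols("task_score ~ sat_total", data=df).fit().resid

gap = (
    df.pivot_table(index="school", columns="class_level", values="resid")
      .eval("senior - sophomore")  # class_level values assumed here
)
print(gap.sort_values(ascending=False))
```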

Longitudinal Studies

• All fall semester freshmen/first-year students sampled.

• Students can then be sampled through follow-up administrations during the spring terms of their sophomore and senior years. This provides the most detailed analysis of value added because individual variance can be controlled for (see the sketch below).
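To illustrate why the longitudinal design controls individual variance: within-student gains can be computed directly, so stable student characteristics cancel out of the difference. A sketch assuming a hypothetical long-format file cla_longitudinal.csv with columns student_id, wave, and task_score:

```python
# Hypothetical sketch of longitudinal value-added: follow the same
# students across administrations and average their within-student gains.
import pandas as pd

waves = pd.read_csv("cla_longitudinal.csv")

wide = waves.pivot(index="student_id", columns="wave", values="task_score")
wide["soph_gain"] = wide["sophomore_spring"] - wide["freshman_fall"]
wide["senior_gain"] = wide["senior_spring"] - wide["freshman_fall"]

print(wide[["soph_gain", "senior_gain"]].mean())
```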

CLA Institutional Reports

• Combining the results from the CLA measures with registrar data (students’ SAT/ACT scores and GPAs) and IPEDS data allows for analyses of patterns and trends across institutions.
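A rough sketch of how such a dataset might be assembled; all file and column names are hypothetical, as the actual CLA reporting pipeline is not described in the slides:

```python
# Hypothetical sketch of assembling an institutional-report dataset:
# merge CLA results with registrar records by student, then attach
# institution-level IPEDS data by school.
import pandas as pd

cla = pd.read_csv("cla_results.csv")      # student_id, school_id, task_score
registrar = pd.read_csv("registrar.csv")  # student_id, sat_total, gpa
ipeds = pd.read_csv("ipeds.csv")          # school_id, enrollment, sector

report = (
    cla.merge(registrar, on="student_id", how="left")
       .merge(ipeds, on="school_id", how="left")
)
print(report.head())
```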

CLA Institutional Report: Sample Page

Motivation Strategies

• Appeal to the importance of doing well for the sake of the institution

• Create incentives for students to perform well

• Develop incentives for the institution and the student

– Align tests with general education and capstone courses

– Create seminars aligned with the tests

Important Characteristics for a Successful Missouri Pilot Project

• Emphasis on improvement

• Useful information for improvement

• Legislative support

• Cost effectiveness

• Contextual understanding of data

• Long-term commitment (focus on trends)

• Multiple comparative measures

• Control variables on differential student characteristics

• Clear understanding of consequences

• Integrated within existing assessment activity

• Faculty access to illustrations of assessment tasks and feedback reports

• Incentives for participation

• Diagnostic information for individual students