Broadening Participation in Computing (BPC) Alliance Evaluation Workshop
Patricia Campbell, PhD
Campbell-Kibler Associates, Inc.
Evaluation Basics: Soup, Cooks, Guests & Improvement
When cooks taste the soup, it’s formative evaluation: the collection of information that can be used to improve the soup. If necessary, the cook’s next step is to explore strategies to fix the problem. The cook makes some changes and then re-tastes the soup, collecting more formative evaluation data.
When the guests taste the soup at the table, they’re doing summative evaluation. They are collecting information to make a judgment about the overall quality and value of the soup. Once the soup is on the table and in the guests’ mouths, there is little that can be done to improve that soup.
Thanks to Bob Stake for first introducing this metaphor.
Challenging Assumptions
When I was a physicist, people would often come and ask me to check their numbers, which were almost always right. They never came and asked me to check their assumptions, which were almost never right.
Eli Goldratt
Pat’s Evaluation Assumptions
The core evaluation question is “What works for whom in what context?”
“Black hole” evaluations are bad.
If you aren’t going to use the data, don’t ask for it.
A bad measure of the right thing is better than a good measure of the wrong thing.
Acknowledging WIIFM (“What’s In It For Me?”) increases response rates.
Process is a tool to help understand outcomes.
Outcomes are at the core of accountability.
Some Thoughts on Measurement
Don’t reinvent the wheel; where possible, use existing measures.
Share measures with other projects. Common questions can be useful.
Look for benchmark measures that are predictors of your longer-term goals.
All self-developed measures need some checking for validity and reliability.
A sample with a high response rate is better than a population with a low one (see the sketch below).
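The sampling point can be made concrete with a quick simulation. The sketch below is purely hypothetical (the population and response rates are invented for illustration): it assumes satisfied students answer a survey more readily than unsatisfied ones, and compares surveying the whole population at a low response rate against a smaller sample pursued to a high response rate.

```python
import random

random.seed(1)

# Hypothetical population of 10,000 students; 60% are truly "satisfied".
population = [random.random() < 0.60 for _ in range(10_000)]

def survey(pop, contact_rate, respond_if_true, respond_if_false):
    """Return the observed satisfaction rate and the number of responses."""
    responses = []
    for satisfied in pop:
        if random.random() > contact_rate:
            continue  # student not included in the survey
        rate = respond_if_true if satisfied else respond_if_false
        if random.random() < rate:
            responses.append(satisfied)
    return sum(responses) / len(responses), len(responses)

# Whole population contacted, but satisfied students respond twice as often
# and overall response is low: the estimate is biased despite the huge N.
est, n = survey(population, 1.0, respond_if_true=0.28, respond_if_false=0.14)
print(f"Population, low response rate: {est:.1%} from {n:,} responses")

# A ~1,000-student sample followed up until 85% respond: close to the truth.
est, n = survey(population, 0.10, respond_if_true=0.85, respond_if_false=0.85)
print(f"Sample, high response rate:    {est:.1%} from {n:,} responses")
```

Under these assumptions the first estimate lands near 75% while the second stays near the true 60%: the smaller, high-response sample wins.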
Some Web-based Sources of Measures
OERL, the Online Evaluation Resource Library. http://oerl.sri.com/home.html
ETS Test Link (a library of more than 25,000 measures). http://www.ets.org/portal/site/ets/menuitem.1488512ecfd5b8849a77b13bc3921509/?vgnextoid=ed462d3631df4010VgnVCM10000022f95190RCRD&vgnextchannel=85af197a484f4010VgnVCM10000022f95190RCRD
Sample Under-represented Student Instruments From OERL
http://oerl.sri.com/home.html
Attitude Surveys
Content Assessments
Course Evaluations
Focus Groups
Interviews
Journal/Log Entries
Project Evaluations
Surveys
Workshop Evaluations
Compared to What? Evaluation Designs
Experimental designs
Quasi-experimental designs
Mixed methods designs
Case studies
NSF does not promote one design; rather, it wants the design that will do the best job of answering your evaluation questions!
[Chart: Percent of Under-Represented STEM Students in 17 Project Colleges, 2000-2004; y-axis 0%-20%]
[Chart: Percent of Under-Represented STEM Students in 17 Project and 17 Comparison Colleges, 2000-2004; y-axis 0%-20%; series: Project Colleges, Comparison Colleges]
Making Comparisons: Why Bother?
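The two charts above illustrate the answer: the project colleges' growth looks meaningful only once a comparison trend rules out background change. As a minimal sketch of how such a chart might be drawn, using matplotlib and made-up numbers (not the workshop's actual data):

```python
import matplotlib.pyplot as plt

# Illustrative (invented) percentages of under-represented STEM students.
years = [2000, 2001, 2002, 2003, 2004]
project = [8.5, 9.2, 10.8, 12.4, 14.1]    # 17 project colleges
comparison = [8.3, 8.6, 9.0, 9.3, 9.6]    # 17 comparison colleges

fig, ax = plt.subplots()
ax.plot(years, project, marker="o", label="Project Colleges")
ax.plot(years, comparison, marker="s", label="Comparison Colleges")
ax.set_ylim(0, 20)                        # match the 0%-20% axis above
ax.set_xticks(years)
ax.set_ylabel("Percent of Under-Represented STEM Students")
ax.legend()
plt.show()
```

Plotting both series on the same 0%-20% axis keeps any apparent project gain honest against the comparison baseline.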
Web-based Sources of Comparisons: K-12
Every state provides web-based report cards for its public schools that include grade-level student achievement scores on standardized mathematics and language arts/reading tests, often disaggregated by race/ethnicity and by sex, for a period of years.
The U.S. Department of Education’s Common Core of Data (CCD) (http://www.nces.ed.gov/ccd/CCD) reports public school data including student enrollment by grade, student demographic characteristics, and the percent of students eligible for free or reduced price lunches.
Comparison schools can be selected using the CCD, and achievement data for both sets of schools over time can be downloaded from states’ report cards, as sketched below.
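As one hedged sketch of that selection step, assuming the CCD extract has been saved as a CSV with hypothetical column names (school_id, enrollment, pct_frl for percent free/reduced-price lunch), comparison schools could be matched to project schools on size and poverty rate:

```python
import csv

def load_schools(path):
    # Hypothetical CCD extract: one row per school.
    with open(path, newline="") as f:
        return [{"id": row["school_id"],
                 "enrollment": float(row["enrollment"]),
                 "pct_frl": float(row["pct_frl"])}
                for row in csv.DictReader(f)]

def closest_match(project_school, candidates):
    """Pick the candidate most similar on enrollment and poverty rate."""
    def distance(c):
        return (abs(c["enrollment"] - project_school["enrollment"]) / 1000
                + abs(c["pct_frl"] - project_school["pct_frl"]))
    return min(candidates, key=distance)

schools = load_schools("ccd_extract.csv")      # hypothetical file name
project_ids = {"060000101", "060000102"}       # hypothetical project schools
project = [s for s in schools if s["id"] in project_ids]
pool = [s for s in schools if s["id"] not in project_ids]

for p in project:
    print(p["id"], "->", closest_match(p, pool)["id"])
```

The same matching logic carries over to selecting college and university comparisons from the Carnegie Classifications, substituting classification, location, control, and size for the K-12 variables.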
Caveats on Using Web-based Sources of Comparisons: K-12
These data can be used only if:
the goal of the strategy/project is to increase student achievement in a subject area tested by the state;
participating students have not yet taken their final state-mandated test in that subject area; and
most of the teachers in a school teaching that subject area are part of the strategy/project, and/or most of the students studying the subject area are part of that strategy/project.
Web-based Sources of Comparisons: College and University
The WebCASPAR database (http://caspar.nsf.gov) provides free access to institution-level data on students from surveys such as the Integrated Postsecondary Education Data System (IPEDS) and the Survey of Earned Doctorates.
The Engineering Workforce Commission (http://www.ewc-online.org/) provides institution-level data (for members) on bachelor’s, master’s, and doctorate enrollees and recipients, by sex by race/ethnicity for US students and by sex for foreign students.
Comparison institutions can be selected from the Carnegie Foundation for the Advancement of Teaching’s website (http://www.carnegiefoundation.org/classifications/) based on Carnegie Classification, location, private/public designation, size, and profit/nonprofit status.
Some Web-based Sources of Resources
OERL, the Online Evaluation Resource Library. http://oerl.sri.com/home.html
User Friendly Guide to Program Evaluation http://www.nsf.gov/pubs/2002/nsf02057/start.htm
AGEP Collecting, Analyzing and Displaying Data http://www.nsfagep.org/CollectingAnalyzingDisplayingData.pdf
American Evaluation Association http://www.eval.org/resources.asp
OERL, the Online Evaluation Resource Library. http://oerl.sri.com/home.html
Includes NSF project evaluation plans, instruments, reports, and professional development modules on:
Designing an Evaluation
Developing Written Questionnaires
Developing Interviews
Developing Observation Instruments
Data Collection
Instrument Triangulation and Adaptation
User Friendly Guide to Program Evaluation
http://www.nsf.gov/pubs/2002/nsf02057/start.htm
Introduction
Section I - Evaluation and Types of Evaluation
Section II - The Steps in Doing an Evaluation
Section III - An Overview of Quantitative and Qualitative Data Collection Methods
Section IV - Strategies That Address Culturally Responsive Evaluations
Other Recommended Reading, Glossary, and Appendix A: Finding an Evaluator
AGEP Collecting, Analyzing and Displaying Data
http://www.nsfagep.org/CollectingAnalyzingDisplayingData.pdf
I. Make Your Message Clear
II. Use Pictures, Where Appropriate
III. Use Statistics and Stories
IV. Be Responsive to Your Audience
V. Make Comparisons
VI. Find Ways To Deal With Volatile Data
VII. Use the Results
Some Thoughts for Discussion
1. What evaluation resources do you have that you would like to share? What evaluation resources would you like to have from others?
2. What can you (and we) do to make your evaluation results known to and useful to:
your project?
other projects?
Jan?
the broader world?
3. Why do you think your strategies will lead to your desired outcomes? Use research, theory, or just plain logic.