WAYNE STATE UNIVERSITY COLLEGE OF EDUCATION DETROIT, MICHIGAN
INSTRUCTIONAL TECHNOLOGY 7150 EDUCATIONAL PRODUCT AND PROGRAM EVALUATION
FALL 2002, SECTION 15077, 4 Credit Hours
Tuesdays, 5:00 p.m. - 9:00 p.m.
LOCATION: Northeast Center, Bishop Gallagher High School, 19360 Harper, Harper Woods, MI 48225
INSTRUCTOR: James L. Moseley, EdD, LPC, CHES, CPT
OFFICE HOURS: By Appointment, Department of Community Medicine, 4201 St. Antoine, Detroit, 48201, University Health Center 9D
PHONE: (313) 577-7948; 881-2438 (Northeast Center), 5:00-9:00 p.m. when class is in session; or (313) 577-6261, daytime, Monday through Thursday
E-Mail: [email protected]
Fax: (313) 577-0316
I. COURSE DESCRIPTION
"Prerequisite: IT 6110 or consent of instructor. Techniques and criteria for evaluation of commercial products; models of instructional evaluation; methods of large-scale curriculum evaluation; summative evaluation; formative evaluation for review of instructional design." Wayne State University Graduate Bulletin, 2000-2002, p. 88.
This course will focus on the evaluation of educational products from two points of view:
1. Instructional materials as products, primarily those which are commercially produced rather than those which you design for your own use.
2. Instructional programs as products, whether they are units of a single course, complete school curricula, or business, industry, health care, or library training programs.
Within that framework you will be introduced to the theories of evaluation, the individuals who have contributed to that theoretical base, and the procedures and materials used to conduct various types of evaluation.
Program evaluation is a systematic set of data collection and analysis activities undertaken to determine the value of a program to aid management, program planning, staff training, public accountability and promotion. Evaluation activities make reasonable judgments possible about the efforts, effectiveness, adequacy, efficiency and comparative value of program options.
B.R. Worthen and J.R. Sanders
II. COURSE OBJECTIVES
At the conclusion of this course, you should be able to:
1. Generate your own definition of educational evaluation.
2. Discuss the role of evaluation in improving education.
3. Trace the history of evaluation in education.
4. Identify and describe at least eight (8) criteria that should be considered when evaluating
a commercially-produced educational product.
5. Define learner verification and identify examples of its application.
6. Identify at least seven (7) models of educational evaluation and the individual(s) responsible for their introduction. In addition, describe their:
a. characteristics,
b. strengths, and
c. weaknesses.
7. Apply models of evaluation to selected programs.
8. Compare and contrast formative, summative, and confirmative evaluation.
9. Distinguish between/among:
a. measurement and evaluation
b. process and product criteria
c. extrinsic and intrinsic evaluation
d. norm-referenced and criterion-referenced measurement
e. goal-free and goal-based evaluation
f. process evaluation, impact evaluation, and outcome evaluation
10. Discuss evaluation in training programs.
11. Know the following concepts and techniques for evaluating education and training programs:
Curriculum Evaluation
Materials Evaluation
Training Evaluation
Audit of Evaluation
Design of Evaluation
Politics of Evaluation
Training of Evaluators
Halo Effect
Hawthorne Effect
John Henry Effect
12. Design appropriate Web-based instrumentation for selected evaluation activities.
13. Discuss and know when and how to use the six (6) basic evaluation designs:
Design 1: The True Control Group, Pre/Post-Test Design
Design 2: The True Control Group, Post-Test Only Design
Design 3: The Non-Equivalent Control Group, Pre/Post-Test Design
Design 4: The Single Group Time Series Design
Design 5: The Time Series Design with a Non-Equivalent Control Group
Design 6: The Before-and-After Design
Before beginning any study, it is wise to define the questions to which answers are needed. Perhaps the best way to begin the formulation of these questions is to review the theoretical perspectives and previous research in the area under consideration. Without questions to answer, the investigator will find it difficult to specify the study objectives that ordinarily provide a road map of the project to be undertaken.
R.J. McDermott and P.D. Sarvela

III. COURSE PROJECTS/DUE DATES
In addition to the course objectives, you are responsible for the following assignments:
1. One detailed evaluation of a commercially-produced instructional product. The product must have a minimum of two (2) components, one of which is non-print (audiotape, videotape, slide/sound, computer-aided instruction, film-strip, etc.). You will be asked to identify your product on 9-17-02.
Written Portion: Your evaluation must be based upon the criteria described in Course Objective #4. The written report is due on 9-24-02.
Oral Portion: You are asked to give a ten-minute oral presentation with media support. Presentations are to be well planned and may not exceed ten minutes in length. There will be a five-minute question-and-answer exchange. An evaluation sheet for this assignment is attached. Presentations will be scheduled throughout the semester.
Due Date Summary:
Identification of Product: 9-17-02
Written Report: 9-24-02
Oral Presentation: Throughout Semester
2. Read and be prepared to discuss the following articles from the course bibliography on
the dates indicated.
a. Hellebrandt, J. and Russell, J. D. "Confirmative Evaluation of Instructional Materials and Learners," Performance & Instruction, Volume 32, Number 6 (July, 1993), 22-27.
b. Moseley, J. and Solomon, D. "Confirmative Evaluation: A New Paradigm for Continuous Improvement," Performance Improvement, Volume 36, Number 5 (May-June, 1997), 12-16.
Discussion: 10-8-02
c. Parry, S. B. "How to Validate an Assessment Tool," Training, Vol. 30,
No. 4 (April, 1993), 37-41.
Discussion: 11-5-02
d. Clark, T., Goodwin, M., Mariani, M., Marshall, M., and Moore, S. "Curriculum Evaluation: An Application of Stufflebeam's Model in a Baccalaureate School of Nursing", Journal of Nursing Education, Vol. 22, No. 2 (February, 1983), 195-199.
Discussion: 10-29-02
e. Ediger, J., Snyder, M. and Corcoran, S. "Selecting a Model for Use in Curriculum Evaluation", Journal of Nursing Education, Vol. 22, No. 5 (May, 1983), 195-199.
Discussion: 10-29-02
3. One formative evaluation project conducted during the design of either:
a. instruction being designed by you; or
b. instruction being designed by someone else.
Specific criteria, instrumentation, and format for this project will be provided in separate handouts. Rubrics are compliments of Dr. Lynn Wietecha.
Due Date: 10-22-02
4. Design a Web-based Survey or Questionnaire. You can design your instrument with formative evaluation in mind or with summative or confirmative evaluation as your focal point. You are not expected to validate the instrument. Include with your instrument a journal article supporting Web-based surveys/questionnaires.
Due Date: 11-5-02
5. One outcome (summative) evaluation project or one confirmative evaluation project, which can be:
a. a hypothetical program evaluation, including an evaluation budget; or
b. a real-life program evaluation, including an evaluation budget.
Specific criteria, instrumentation, and format for this project will be provided in separate handouts. This project is a group assignment, and you are expected to submit an evaluation budget with it. Rubrics are compliments of Dr. Lynn Wietecha.
Due Date: 12-3-02
Relative to the course projects, the emphasis is on your ability to apply the designated criteria, procedures, and instrumentation to the evaluation of your products. Minimum performance requires that your evaluations include the specified elements and that they follow the necessary sequence, if a sequence has been prescribed. In addition, you will be judged on evidence of comprehensiveness and conformity to accepted standards of written and oral communication. Projects submitted after the due date will be marked down one grade.
Evaluation questions developed in the early program-planning stages can be answered once the data have been analyzed. Descriptive statistics can be used to summarize or describe the data, and inferential statistics can be used to generate or test hypotheses. Evaluators then interpret the findings and present the results to the stakeholders, via a formal or informal report.
J.F. McKenzie and J.L. Smeltzer
IV. COURSE RESOURCES
Required:
There are three text resources for IT 7150:
1. Fitz-Gibbon, C.T. and Morris, L.L. How to Design a Program Evaluation. Beverly Hills, CA: Sage Publications, 1987 (paper).
2. Tessmer, M. Planning and Conducting Formative Evaluations. London: Kogan
Page, 1993 (paper).
3. Worthen, B.R., Sanders, J.R. and Fitzpatrick, J.L. Program Evaluation: Alternative Approaches and Practical Guidelines, Second Edition. New York, NY: Longman, 1997 (paper). (If the Third Edition is available by the time classes begin, purchase it.)
Recommended:
4. Flagg, Barbara N. Formative Evaluation for Educational Technologies. Hillsdale,
New Jersey: Lawrence Erlbaum Associates, Publishers, 1990.
5. Herman, J. L., Morris, L. L. and Fitz-Gibbon, C. T. Evaluator's Handbook. Beverly Hills, CA: Sage Publications, 1987 (paper).
6. Phillips, Jack J. Handbook of Training Evaluation and Measurement Methods,
Third Edition. Houston, TX: Gulf Publishing Company, 1997.
7. The Joint Committee on Standards for Educational Evaluation. The Program Evaluation Standards: How to Assess Evaluations of Educational Programs, Second Edition. Thousand Oaks, CA: Sage Publications, Inc., 1994 (paper).
A series of readings from both text and periodical sources is provided in the course bibliography; these readings, along with class presentations and handouts, will provide you with the resources necessary to complete the objectives.
V. CLASS ACTIVITY
Class Activity will include:
1. Lecture and discussion to present specific information related to course topics.
2. Individual and small group work on evaluation projects.
3. Individual presentations of evaluation by students. Oral evaluation reports will be given throughout the semester, and they can be scheduled immediately. A sign-up sheet will be distributed.
Evaluation, conducted as the program is implemented and operating, provides information which enables program managers to correct performance deficits and/or undesirable deviations from the expected course. When a program is evaluated summatively, an opportunity to determine what worked within a program, what didn't, and why is afforded.
C.D. Hale, F. Arnold, and M.T. Travis
VI. CLASS TIME
The tentative schedule of class time is as follows:
5:00 p.m. - 6:30 p.m.  Lecture/Discussion; Product Evaluations
6:30 p.m. - 7:00 p.m.  Dinner Break
7:00 p.m. - 8:10 p.m.  Text Discussion; Lecture/Discussion; Product Evaluations
8:10 p.m. - 9:00 p.m.  Class Time for Feedback on Your Progress With Your Evaluation Projects; Product Evaluations
VII. CLASS POLICIES
1. All policies stated in the Wayne State University Graduate Bulletin, 2000-2002 will be followed.
2. If a class is missed because of illness or employment demands, please contact the
instructor at 577-7948, in advance, if possible. Effective with the first class session, students who miss more than 1.5 class sessions should not expect to earn a grade of "A". Attendance is recorded and your active participation is expected.
3. Occasionally, handouts will be distributed. If you are absent, ask a fellow student to
collect the handouts for you.
4. All written work must demonstrate appropriate communication skills (e.g., spelling, punctuation, grammar).
5. All written work must meet scholarly standards as stated in the APA Publication
Manual or in the Turabian Manual.
6. All written assignments must be typewritten or computer generated.
7. Research shows that cooperative learning works. Students are expected to be prepared for class by reading the assigned topics and by intelligently discussing them with their peers.
8. Taping devices are not permitted in this class unless there is documented proof of
disability.
9. Grades will be determined by the work available to the instructor at the time of the final examination.
VIII. STUDENT GRADING:
Your grade for IT 7150 will be determined by your performance as follows:
10 points   Class Preparation and Discussion; Product Evaluation
10 points   Oral Report
10 points   Written Report
20 points   Program (Outcome) Evaluation or Confirmative Evaluation with Evaluation Budget
20 points   Formative Evaluation
10 points   Web-based Surveys/Questionnaires
15 points   Mid-Term Examination (Written and Practical)
            Topics: Product Evaluation, Formative Evaluation, Outcome Evaluation (Focusing and Designing Stages)
5 points    Final Examination (Written and Practical)
            Topics: Outcome Evaluation, Confirmative Evaluation, Formative Evaluation, Web-based Instrumentation, Evaluation Models, and Course Objectives
__________
100 points  Total
Legend:
96-100 points = A (4.00)
90-95 points = A- (3.67)
87-89 points = B+ (3.33)
84-86 points = B (3.00)
80-83 points = B- (2.67)
77-79 points = C+ (2.33)
74-76 points = C (2.00)
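For illustration only, the point-to-grade legend above can be expressed as a small lookup function. This sketch is not part of the course materials; the function name and the "below C" label for totals under 74 points (which the legend does not cover) are assumptions.

```python
def letter_grade(points: int) -> str:
    """Map total course points (0-100) to a letter grade per the syllabus legend."""
    scale = [
        (96, "A"), (90, "A-"), (87, "B+"), (84, "B"),
        (80, "B-"), (77, "C+"), (74, "C"),
    ]
    for cutoff, grade in scale:
        if points >= cutoff:
            return grade
    return "below C"  # hypothetical label: the legend lists no grade under 74 points

# e.g. letter_grade(92) returns "A-"
```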
Note Regarding Written Assignments:
Students have the option to rewrite (redo) written assignments graded B- or lower. In such cases, the original grade and the revised grade will be averaged; the averaged grade becomes the final grade for the assignment.
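As a worked example of the averaging policy, assuming (hypothetically) that assignment grades are recorded on the 4.00 numeric scale from the legend:

```python
def final_assignment_grade(original: float, revised: float) -> float:
    """Per the rewrite policy: the original and revised grades are averaged."""
    return (original + revised) / 2

# A B- (2.67) original rewritten to A (4.00) quality averages to 3.335,
# i.e. roughly a B+ on the numeric scale.
```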
Programs that are evaluated are the result of a long and complicated political process in which winners, losers, and various important constituents exist. Moreover, each group has its own perceptions of what the program is trying to accomplish, independent of its explicit goals. Evaluation results must fit into this ongoing process and must take into account the shifting in alliances and expectations that occur between the onset of the evaluation and its report of findings.
J.E. Veney and A.D. Kaluzny
TENTATIVE CALENDAR OF EVENTS

Week 1 (9-3-02): Introduction and Orientation; Pre-Test; Systematic Design of Instruction; Program Evaluation: Definition and Purposes. Text: Course Bibliography.
Week 2 (9-10-02): Evaluation Terminology; Perceptions about Evaluation; Product Criteria; Application of Criteria to Selected Products. Text: PE: 1,2,3; HD: 1,2.
Week 3 (9-17-02*): Focusing the Evaluation: Information Gathering (Program, Purposes, Constraints); Delivering an Oral Evaluation Report. Text: PE: 12,13,14; HD: 3,4.
Week 4 (9-24-02*): Focusing the Evaluation: Objects of Evaluation, Sources, Purposes, Audiences, Events Influencing the Setting; Critiques of Oral Reports: Evaluation and Presentation. Text: PE: 13,14,15; HD: 3,4.
Week 5 (10-1-02): Designing the Evaluation: What Does a Design Include? How Is a Design Constructed? How Is a Good Design Recognized? Involving Audiences in the Evaluation Design: Strategies, Benefits, Risks; Evaluation Designs: Class Discussion; Critiques of Oral Reports. Text: PE: 12,13,14,15; HD: 5,6; PC: 1,2.
Week 6 (10-8-02*): Contrasts between Formative and Summative Evaluation: Definition, Purpose, Tone, Form, Length, Level of Specificity; Kinds of Questions the Evaluator Might Pose; Ways to "Sneak In" Formative Evaluation; Formative Evaluation Exercise: Data, Sources, Phases; Formative Evaluation of an Entire Instructional Sequence; Formative Evaluation of a Slide-Tape Presentation; Critiques of Oral Reports. Text: PE: 16,17; HD: 5,6; PC: 3,4.
Week 7 (10-15-02): Formative Evaluation: Setting the Boundaries, Preparing a Program Statement, Monitoring Program Implementation and Achievement of Program Objectives, Reporting and Conferring with Planners and Staff; Formative Evaluation Contract; Class Exercise; Group Work on Summative Evaluation; Critiques of Oral Reports. Text: PE: 16,17; PC: 5,6.
Week 8 (10-22-02*): Critique of Oral Reports; Group Work on Summative Evaluation. Text: PE: 1-3, 12-17; Appendix: pp. 511-514.
Week 9 (10-29-02): Selected Evaluation Approaches: Utilitarian Evaluation (Tyler's Model, Hammond's EPIC Model, Discrepancy Evaluation Model, Stufflebeam's CIPP, UCLA Model); Pluralist-Intuitionist Evaluation (Scriven's Model, Stake's Countenance Model, Eisner's Connoisseurship Model); Adversary Evaluation; Responsive Evaluation; Kirkpatrick's Levels of Evaluation; Mid-Term Exam (Written and Practical). Text: PE: 4,5,6,11.
Week 10 (11-5-02*): Collecting Information: Quantitative and Qualitative; Concerns to Bear in Mind When Designing Information Collection Procedures; Sampling Units, Size, Bias; Commonly Used Instruments; Specifications for an Evaluation Instrument; Class Exercise: Review of Selected Instruments; Critique of Oral Reports. Text: PE: 17,18.
Week 11 (11-12-02): Research Period. Class will not meet in formal session.
Week 12 (11-19-02): Collecting Information; Attitude Rating Scales (Ordered Scale, Likert Scale, Semantic Differential); Questionnaires: Format, How to Boost Response Rate; Interviews: Kinds, How to Ask Questions (nominal, ordinal, interval), How to Conduct the Interview; Criteria for Selecting Information Collectors; How to Train Information Collectors; Observations: Kinds, Rating Scales (numerical, graphic, checklists); Critique of Oral Reports. Text: PE: 17,18.
Week 13 (11-26-02): Day scheduled as a Thursday; no class.
Week 14 (12-3-02*): Analyzing and Interpreting Evaluation Information: Guidelines for Coding and Analyzing; Verification and Cleaning Procedures; Problems to Look for When Verifying Data; Guidelines and Procedures for Analysis; Use of Tables, Figures, Charts; Interpreting Results; Critique of Oral Reports. Text: PE: 17,18,19.
Week 15 (12-10-02*): Managing Evaluation: Necessary Skill Areas; Budget for an Evaluation; Management Plan: Timeline, Time Log; Ethics; Reporting Information: Typical Kinds of Reports, Appropriate Style and Structure for the Report; Helping Audiences Interpret and Use Reports; Class Exercise: IT Master's Program; Selected Evaluation Approaches; Critique of Oral Reports; Meta-Evaluation: Rationale, Purpose, Scope, Strategies; Course Evaluation. Text: PE: 19,20; Appendix: pp. 511-51.
Week 16 (12-17-02): Final Exam (Written and Practical).

Legend:
PE = Program Evaluation: Alternative Approaches and Practical Guidelines
HD = How to Design a Program Evaluation
PC = Planning and Conducting Formative Evaluations
* = Due Dates for Assignments (Readings and Written)
IT 7150: EDUCATIONAL PRODUCT AND PROGRAM EVALUATION SELECTED BIBLIOGRAPHY
General References
Abernathy, D.J. “Thinking Outside the Evaluation Box,” Training & Development, Volume
53, Number 2 (February, 1999), 19-23. Alkin, M.C. Debates on Evaluation. Newbury Park, CA: Sage Publications, Inc., 1990. This volume presents a penetrating dialogue among top theoreticians and practitioners about
the nature of evaluation. Anderson, S.B., Ball, S. The Profession and Practice of Program Evaluation. San
Francisco, CA: Jossey-Bass, Inc., 1978. Sections on definition of evaluation; purposes; methods. *Anderson, S.B., Ball, S., Murphy, R.T., and Associates, Encyclopedia of Educational
Evaluation. San Francisco, CA: Jossey-Bass, Inc.,1975. Excellent resource for definitions, concepts, and techniques. Anderson, S.B. and Coles, C.D. (eds.). Exploring Purposes and Dimensions. San Francisco,
CA: Jossey-Bass, Inc., 1978 (PE #1). Analyzes and updates the many facets of program evaluation. Topics include the expanding role of evaluation, needs analysis, program feasibility analysis, evaluating program impact, and evaluation as transaction.
ASTD Trainer's Toolkit: Evaluating the Results of Training, 1992.
With 24 evaluation forms, checklists, guidelines, models, and formulas, this toolkit shows how to document results that help top management understand training's bottom-line impact.
Evaluation Instruments, 1991.
This Toolkit contains 25 evaluation forms used by organizations in fields such as education, banking, electronics, insurance, transportation, and government. Reaction, learning and results evaluation instruments are included.
Berk, R.A. and Rossi, P.H. Thinking About Program Evaluation, Second Edition.
Newbury Park, CA: Sage Publications, Inc., 1998. The book is rich with examples in education, social science, health care, and criminology. Key terms are explained in principle and in practice.
Bingham, R. and Felbinger, C.L. Evaluation in Practice: A Methodological Approach.
New York, NY: Longman, Inc. 1989.
Bloom, B.S., Hastings, J.T., and Madaus, G.F. Handbook on Formative and Summative
Evaluation of Student Learning. New York, NY: McGraw-Hill, Inc., 1971. Bloom, M. and Fischer, J. Evaluating Practice: Guidelines for the Accountable
Professional. Englewood Cliffs, NJ: Prentice-Hall, Inc., 1982. *Borich, G.D. and Jemelka, R.P. "Definitions of Program Evaluation and Their Relation to
Instructional Design", Educational Technology, August, 1981, 31-38. Bryk, A.S. Stakeholder-Based Evaluation. San Francisco, CA: Jossey-Bass, Inc., 1983
(PE #17). Investigates ways to design and implement evaluations that are responsive to the needs and concerns of people who have a stake in the results - for example, those who administer the program, those who finance it, and those who participate in it. Critically assesses the benefits, drawbacks, objectivity, and fairness of stakeholder-based evaluation.
Cangelosi, J.S. Evaluating Classroom Instruction. New York: Longman, 1991. Carey, L.M. Measuring and Evaluating School Learning. Boston, MA: Allyn and Bacon,
Inc., 1988. Chelimsky, E. and Shadish, W.R., eds. Evaluation for the 21st Century. Thousand Oaks,
CA: Sage Publications, 1997. Chen, H.T. Theory-Driven Evaluations. Newbury Park, CA: Sage Publications, 1996. Conroy, B. Library Staff Development and Continuing Education. Littleton, CO:
Libraries Unlimited, Inc., 1978. Part III: "Evaluating the Learning Program" is particularly relevant for librarians. Converse, J.M. and Presser, S. Survey Questions: Handcrafting the Standardized
Questionnaire. Newbury Park, CA: Sage Publications, Inc., 1987. *Cronbach, L.J. "Course Improvement Through Evaluation", Teachers College Record No.
64, (1963), 672-683. (Appears in Madaus text) "Evaluation of Programs", Encyclopedia of Educational Research, Fifth Edition, Vol. 2.
New York, NY: MacMillan, 1982, 592-611. Fetterman, D.M., Kaftarian, S.J. and Wandersman, A., eds. Empowerment Evaluation.
Thousand Oaks, CA.: Sage Publications, 1996. Fink, A. and Kosecoff, J. How to Conduct Surveys: A Step-by-Step Guide, Second
Edition. Beverly Hills, CA: Sage Publications, 1998.
Examines the nitty-gritty of interview and questionnaire surveys, covers computer-assisted
and interactive surveys. Fowler, Jr, F.J. Survey Research Methods, Second Edition. Newbury Park, CA: Sage
Publications, 1993. Frey, J.H. Survey Research by Telephone, Sage Library of Social Research, Volume 150.
Newbury Park, CA: Sage Publications,1989. Gagne, R.M. (Ed.). Instructional Technology: Foundations. Hillsdale, NJ: Lawrence
Erlbaum Associates, Publishers, 1987. Gronlund, N.E., and Linn, R. L. Measurement and Evaluation in Teaching, Sixth Edition.
New York, NY: MacMillan Publishing Company, 1990. Guba, E.G. and Lincoln, Y.S. Fourth Generation Evaluation, Newbury Park, CA: Sage
Publications, 1989. *Hale, J. “Evaluation: It’s Time to Go Beyond Levels 1, 2, 3, and 4,” Performance
Improvement, Volume 37, Number 2, February, 1998, 30-34. Herman, J.L. (Ed.). Program Evaluation Kit. Second Edition. Newbury Park, CA: Sage
Publications, 1987. Kit includes: Evaluator's Handbook, How to Focus an Evaluation, How to Design a Program Evaluation, How to Use Qualitative Methods in Evaluation, How to Assess Program Implementation, How to Measure Attitudes, How to Measure Performance and Use Tests, How to Analyze Data, How to Communicate Evaluation Findings.
Hopkins, G. “How to Design an Instructor Evaluation,” Training & Development, Volume
53, Number 3 (March, 1999), 51-53. House, E.R. (Ed.). Philosophy of Evaluation. San Francisco, CA: Jossey-Bass, Inc., 1983
(PE #19). Brings together the insights of a number of evaluation specialists to provide new perspectives for understanding the logic and purpose of evaluation. Examines principles underlying the determination of validity, reliability, causality, fairness, and bias. Assesses the relationship of an evaluation's design, intent, and procedures to its results.
House, E. R. Professional Evaluation: Social Impact and Political Consequences.
Newbury Park, CA: Sage Publications, 1993. Kane, R.L. "Evaluating Resistant Programs: Problems for Educational Evaluators Working
Within Accountability Frameworks", Educational Technology, September, 1980, 24-27. Knox, A.B. Evaluation for Continuing Education: A Comprehensive Guide to Success.
San Francisco, CA: 2002.
Lavrakas, P.J. Telephone Survey Methods: Sampling, Selection, and Supervision Second
Edition. Newbury Park, CA: Sage Publications, Inc., Vol. 7, Applied Social Research Methods Series, 1993.
Miles, M.B. and Huberman, A.M. Qualitative Data Analysis, Second Edition. Thousand
Oaks, CA: Sage Publications, 1994. Mohr, L. B. Impact Analysis for Program Evaluation. Second Edition. Newbury Park, CA:
Sage Publications, 1995. *Moseley, J.L.,and Dessinger, J.C. "Criteria for Evaluating Instructional Products and
Programs for Older Adult Learners," Performance and Instruction, Volume 33, Number 3, March, 1994, 39-45.
*Moseley, J.L., Larson, S. "A Job Performance Aid Evaluation Tool", Performance and
Instruction, Volume 31, Number 8, September, 1992, 24-26. Owen, J. Program Evaluation: Forms and Approaches. Thousand Oaks, CA: Sage
Publications, 1999. Parry, S. B. "How to Validate an Assessment Tool," Training, Vol.30, No. 4, April 1993, 37-
39, 41. Patton, M.Q. Qualitative Evaluation and Research Methods, Second Edition, Newbury
Park, CA: Sage Publications, Inc., 1990. Patton, M.Q. Utilization-Focused Evaluation, Third Edition. Thousand Oaks, CA.: Sage
Publications, 1996. Piskurich, G.M. “Re-Evaluating Evaluation,” Performance Improvement, Volume 36,
Number 8, September, 1997, 16-17. *Popham, W.J. Evaluating Instruction. NJ: Prentice-Hall, Inc., 1973, 6-21. First section on
current conceptions of educational evaluation. Popham, W.J. Educational Evaluation, Second Edition, NJ: Prentice-Hall, 1988. Popham, W.J. Criterion-Referenced Measurement. Englewood Cliffs, NJ: Educational
Technology Publications, 1971. Posavac, E.J. and Carey, R.G. Program Evaluation: Methods and Case Studies, Fourth
Edition. NJ: Prentice-Hall, Inc., 1992. Part I: An Overview of Program Evaluation including Ethical Standards. Rohrer-Murphy, L., Moller, L. and Benscoter, B. "A Performance Technology Approach to
Improving Evaluation,” Performance Improvement, Volume 36, Number 8, September, 1997, 10-15.
Rossi, P.H. (Ed.). Standards for Evaluation Practice. San Francisco, CA: Jossey-Bass, Inc., 1982 (PE #15).
Outlines and critically analyzes the Evaluation Research Society's adopted standards and considers how they can be adapted to program evaluation in social service programs, the health field, educational institutions, industry, law enforcement agencies, and other areas.
Rossi, P.H., Freeman, H.E., and Lipsey, M.W. Evaluation: A Systematic Approach, Sixth
Edition. Newbury Park, CA: Sage Publications, 1998. Russ-Eft, D. and Preskill, H. Evaluation in Organizations: A Systematic Approach to
Enhancing Learning, Performance, and Change. Cambridge, MA: Perseus Publishing, 2001.
*Schalock, R.L. Outcome-Based Education. New York, NY: Plenum Publishing
Corporation, 1995. Schuman, H. and Presser, S. Questions and Answers in Attitude Surveys: Experiments on
Question Form, Wording, and Content: Thousand Oaks, CA: Sage Publications, 1996. Schwandt, T.A. and Halpern, E.S. Linking Auditing and Meta Evaluation, Applied Social
Research Methods, Volume II. Newbury Park, CA, 1988. *Scriven, M. Evaluation Thesaurus, Fourth Edition. Newbury Park, CA: Sage
Publications, Inc. 1991. Written by one of the leaders in evaluation, this thesaurus of nearly 2000 entries provides readers with quick analyses of the major concepts, positions, acronyms, processes, techniques, and checklists in the field of evaluation.
*Scriven, M. "The Methodology of Evaluation", in Perspectives of Curriculum Evaluation:
American Educational Research Association Monograph Series on Curriculum Evaluation No. 1. Chicago, IL: Rand McNally, 1967.
*Scriven, M. "Pros and Cons About Goal-Free Evaluation", Evaluation Comment 3, No. 4
(1972), 1-4. Sechrest, L. (Ed.). Training Program Evaluators. San Francisco, CA: Jossey-Bass, Inc.,
1980 (PE #8). Discusses professional training in research design, quantitative analysis, real-world problem solving, secondary analysis, and interdisciplinary research. Describes--using examples from the fields of health, welfare, law enforcement, and education--the skills a client has a right to expect and ways to acquire them.
Shadish, W.R., Cook, T.D., and Leviton, L.C. Foundations of Program Evaluation.
Newbury Park, CA: Sage Publications, Inc., 1991.
Shaw, I. Qualitative Evaluation. Thousand Oaks, CA: Sage Publications, 1999. Silverman, D. Interpreting Qualitative Data: Methods for Analyzing Talk, Text, and
Interaction. Thousand Oaks, CA: Sage Publications, 1993. Spitzer, D.R. “Embracing Evaluation”, Training: The Human Side of Business, Volume 36,
Number 6, (June, 1999), 42-47. St. Pierre, R.G. (Ed.). Management and Organization of Program Evaluation. San Francisco, CA:
Jossey-Bass, Inc., 1983 (PE #18). Offers organizational and management techniques for improving program evaluation units and individual evaluation projects in diverse public and private settings. Includes ideas for smooth functioning in such relationships with program users.
Stake, R.E. Evaluating Educational Programs. Urbana-Champaign: University of Illinois Center
for Instructional Research and Curriculum Evaluation, 1976. *Stufflebeam, D.L., Foley, W.J., Gephart, W.J., Guba, E.G., Hammond, L.R., Merriman, H.O., and
Provus, M.M. Educational Evaluation and Decision Making. Itasca, IL: F.E. Peacock Publishers, 1971. Chapter 7 - CIPP Model.
Sudman, S. and Bradburn, N.M. Asking Questions: A Practical Guide to Questionnaire Design.
San Francisco, CA: Jossey-Bass, Inc., 1985. Tiemann, P.W. "Toward Accountability: Learner Verification as the Next Step", NSPI Journal 13,
No. 10, (1974), 3-7. Torres, R. Preskill, H.S. and Piontek, M.E. Evaluation Strategies for Communicating and
Reporting. Thousand Oaks, CA.: Sage Publications, 1996. Tuckman, B.W. Evaluating Instructional Programs, Second Edition. Boston, MA: Allyn &
Bacon, 1985. Chapter 1 - Overview of Instructional Evaluation Chapter 9 & 10 - Formative and Summative Evaluation Appendix A: Case Studies 1 & 2 (Examples of Formative and Summative).
Wholey, J. S., Hatry, H.P. & Newcomer, K.E., eds. Handbook of Practical Program Evaluation.
San Francisco, CA: Jossey-Bass Publishers, 1994. Worthen, B.R. and White, K.R. Evaluating Educational and Social Programs: Guidelines for
Proposal Review, Onsite Evaluation, Evaluation Contracts, and Technical Assistance. Hingham, MA: Kluwer Academic Publishers, 1987.
Cost Effectiveness Levin, H.M. Cost-Effectiveness, New Perspectives in Evaluation, Volume 4. Thousand Oaks, CA.:
Sage Publications, 1983. Nas, T.F. Cost-Benefit Analysis. Thousand Oaks, CA.: Sage Publications, 1996. Phillips, J.J. Return on Investment in Training and Performance Improvement Programs.
Houston, Texas: Gulf Publishing Company, 1997.
Yates, B.T. Analyzing Costs, Procedures, Processes, and Outcomes in Human Services, Applied Social Research Methods, Volume 42. Thousand Oaks, CA: Sage Publications, 1996.
Evaluation Models

*Kandaswamy, S. "Evaluation of Instructional Materials: A Synthesis of Models and Methods", Educational Technology, June, 1980, 19-26.
Madaus, G.F., Scriven, M. and Stufflebeam, D.L. (eds.). Evaluation Models: Viewpoints on Educational and Human Services Evaluation. Hingham, MA: Kluwer Academic Publishers, 1983.
*Moseley, J.L. and Dessinger, J.C. "The Dessinger-Moseley Evaluation Model: A Comprehensive Approach to Training Evaluation," in Dean, P.J. and Ripley, D.E., eds. Performance Improvement Interventions: Instructional Design and Training, Volume Two. Washington, D.C.: The International Society for Performance Improvement, 1998, 233-260.
Pace, C.R. & Friedlander, J. "Approaches to Evaluation: Models and Perspectives", in G.R. Hanson (Ed.), Evaluating Program Effectiveness. San Francisco, CA: Jossey-Bass, Inc., 1978.
Provus, M. Discrepancy Evaluation for Educational Program Improvement and Assessment. Berkeley, CA: McCutchan Publishing Corp., 1971. Discrepancy Evaluation Model.
Willis, B. "A Design Taxonomy Utilizing Ten Major Evaluation Strategies", International Journal of Instructional Media, Vol. 6, No. 4, (1978-79), 369-388.
Wolf, R.L. "Trial by Jury: A New Evaluation Method. Part I: The Process", and Arnstein, G., "Part II: The Outcome", Phi Delta Kappan, Vol. 57, No. 3 (November, 1975), 185-190.
*Worthen, B.R. & Sanders, J.R. Educational Evaluation: Alternative Approaches and Practical Guidelines. New York, NY: Longman, Inc., 1987.
Worthen, B.R., Sanders, J.R. and Fitzpatrick, J.L. Program Evaluation: Alternative Approaches and Practical Guidelines, Second Edition. New York, NY: Longman, 1997.
Formative Evaluation/Summative Evaluation/Confirmative Evaluation

Baker, E.L. and Alkin, M.C. "Formative Evaluation of Instructional Development", AV Communication Review 21, No. 4, (Winter, 1973), 389-418.
Cambre, M.A. "Historical Overview of Formative Evaluation of Instructional Media Products", ECTJ 29, No. 1, (Spring, 1981), 3-25.
*Carey, L., and Dick, W. "Summative Evaluation", in L.J. Briggs, et al. (eds.), Instructional Design: Principles and Applications, Second Edition. Englewood Cliffs, NJ: Educational Technology Publications, 1991. Chapter 11 - Purposes, phases of summative evaluation.
Chinien, C. "Paradigms of Inquiry for Research in Formative Evaluation", Performance and Instruction, Vol. 29, No. 9, October, 1990.
*Cross, P.K. "The Adventures of Education in Wonderland: Implementing Education Reform", Phi Delta Kappan, Vol. 68, No. 7, (March, 1987), 496-502.
Deshler, D. (Ed.). Evaluation for Program Improvement. San Francisco, CA: Jossey-Bass, Inc., 1984 (CE #24).
*Dick, W., and Carey, L. "Formative Evaluation", in L.J. Briggs, et al. (eds.), Instructional Design: Principles and Applications, Second Edition. Englewood Cliffs, NJ: Educational Technology Publications, 1991. Chapter 10 - Purposes, phases of formative evaluation.
Dick, W., Carey, L. and Carey, J.O. (2001). The Systematic Design of Instruction, Fifth Edition. New York, NY: Longman. Chapters 10-12 - Formative and Summative Evaluation.
*Dick, W. "Formative Evaluation in Instructional Development", Journal of Instructional Development, Vol. 3, No. 3, (Spring, 1980), 3-6.
Flagg, B.N. Formative Evaluation for Educational Technologies. Hillsdale, NJ: Lawrence Erlbaum Associates, Publishers, 1990.
Geis, G.L. "Formative Evaluation: Developmental Testing and Expert Review", Performance and Instruction, Vol. 26, No. 4, (May/June, 1987), 1-8.
Gagne, R.M., Briggs, L.J., and Wager, W.W. Principles of Instructional Design, Fourth Edition. Fort Worth, TX: Harcourt Brace Jovanovich College Publishers, 1992. Chapter 16 - Evaluating Instruction.
Golas, K.C. "The Formative Evaluation of Computer-Assisted Instruction", Educational Technology, January, 1983, 26-28.
Hellebrandt, J. and Russell, J.D. "Confirmative Evaluation of Instructional Materials and Learners", Performance & Instruction, Volume 32, Number 6, July, 1993, 22-27.
Ilgen-Lieth, M. and Hazen, M. "A Procedure for Formative Evaluation of Instructional Software by Instructional Design Consultants, Part 1 - Overview", Performance and Instruction, Vol. 26, No. 5 (July, 1987), 16-19.
Ilgen-Lieth, M. and Hazen, M. "A Procedure for Formative Evaluation of Instructional Software by Instructional Design Consultants, Part 2 - The Preliminary Evaluation", Performance and Instruction, Vol. 26, No. 6 (August, 1987), 30-34.
Ilgen-Lieth, M. and Hazen, M. "A Procedure for Formative Evaluation of Instructional Software by Instructional Design Consultants, Part 3 - Group Evaluation, Feedback, and Conclusion", Performance and Instruction, Vol. 26, No. 7 (September, 1987), 46-48.
*Moseley, J.L. and Solomon, D.L. "Confirmative Evaluation: A New Paradigm for Continuous Improvement," Performance Improvement, Vol. 36, No. 5, May/June, 1997, 12-16.
Newman, D. and Brown, R.D. Applied Ethics for Program Evaluation. Thousand Oaks, CA: Sage Publications, 1996.
Pearlstein, G. "Gathering Formative Evaluation Data Daily", Performance and Instruction, Vol. 27, No. 10 (November/December, 1988), 49-50.
Sanders, J. and Cunningham, D. "A Structure for Formative Evaluation in Product Development", Review of Educational Research, 43, (1973), 276-278.
*Tessmer, M. Planning and Conducting Formative Evaluations. London: Kogan Page, 1993.
*Tessmer, M. "Formative Evaluation Alternatives," Performance Improvement Quarterly, 7(1), 1994, 3-18.
*Thiagarajan, S. "Formative Evaluation in Performance Technology," Performance Improvement Quarterly, 4(2), 1991, 22-24.
Evaluation Standards

*The Joint Committee on Standards for Educational Evaluation. The Program Evaluation Standards: How to Assess Evaluations of Educational Programs, Second Edition. Thousand Oaks, CA: Sage Publications, 1994.
Program Evaluation in Health Care

Aday, L.A. Designing and Conducting Health Surveys, Second Edition. San Francisco, CA: Jossey-Bass Publishers, 1996.
Bell, D.F. and Bell, D.L. "Effective Evaluations", Nurse Educator, November-December, 1979, 6-15.
Borich, G.D. (Ed.). Evaluating Educational Programs and Products. Englewood Cliffs, NJ: Educational Technology Publications, 1974. Evaluator Roles, Evaluation Models, Techniques.
Corcoran, K. and Fischer, J. Measures for Clinical Practice: A Source Book. New York, NY: The Free Press, 1987.
*Clark, T., Goodwin, M., Mariani, M., Marshall, M., and Moore, S. "Curriculum Evaluation: An Application of Stufflebeam's Model in a Baccalaureate School of Nursing", Journal of Nursing Education 22, No. 2 (February, 1983), 54-58.
Dignan, M.B., Steckler, A.B. and Goodman, R.M. Measurement and Evaluation of Health Education, Third Edition. Springfield, IL: Charles C. Thomas, 1995.
*Ediger, J., Snyder, M. and Corcoran, S. "Selecting a Model for Use in Curriculum Evaluation", Journal of Nursing Education 22, No. 5 (May, 1983), 195-199.
Fink, A. Evaluation Fundamentals: Guiding Health Programs, Research, and Policies. Newbury Park, CA: Sage Publications, Inc., 1993.
Green, L.W. and Lewis, F.M. Measurement and Evaluation in Health Education and Health Promotion. Palo Alto, CA: Mayfield Publishing Company, 1986.
Hale, C.T., Arnold, F. and Travis, M.T. Planning and Evaluating Health Programs: A Primer. New York, NY: MacMillan, 1994.
*Herbener, D.J., and Watson, J.E. "Models for Evaluating Nursing Education Programs", Nursing Outlook 40, No. 1 (January/February, 1992), 27-32.
Lorig, K., Stewart, A. and Ritter, P., et al. Outcome Measures for Health Education and Other Health Care Interventions. Thousand Oaks, CA: Sage Publications, 1996.
Luckey, R.H., Sweet, J. and Knupp, B. "The Internet: An Essential Tool for College Health Networking," Journal of American College Health, 45, 6-10.
McDermott, R.J. and Sarvela, P.D. Health Education Evaluation and Measurement: A Practitioner's Perspective, Second Edition. New York, NY: WCB/McGraw-Hill, 1999.
McDowell, I. and Newell, C. Measuring Health: A Guide to Rating Scales and Questionnaires. New York: Oxford University Press, 1987.
McKenzie, J.F. and Jurs, J.L. Planning, Implementing and Evaluating Health Promotion Programs: A Primer. New York: MacMillan, 1993.
Meredith, J. "Program Evaluation Techniques in the Health Services", American Journal of Public Health, 66, No. 11 (November, 1976), 1069-1073.
Muraskin, L.D. Understanding Evaluation: The Way to Better Prevention Programs. Washington, D.C.: U.S. Department of Education, 1993.
Opatz, J.P., ed. Economic Impact of Worksite Health Promotion. Champaign, IL: Human Kinetics, 1994.
Sarvela, P.D. and McDermott, R.J. Health Education Evaluation and Measurement: A Practitioner's Perspective. Madison, WI: WCB Brown & Benchmark Publishers, 1993.
Schulberg, H.C. and Baker, F. (eds). Program Evaluation in the Health Science Fields, Volume II. New York, NY: Human Sciences Press, 1979.
Streiner, D.L. and Norman, G.R. Health Measurement Scales: A Practical Guide to Their Development and Use, Second Edition. New York, NY: Oxford Medical Publications, 1995.
Warren, K.S., Mosteller, F., New York Academy of Sciences, and L.W. Frohlich Charitable Trust, eds. Doing More Good Than Harm: The Evaluation of Health Care Interventions. New York, NY: New York Academy of Sciences, 1993.
Windsor, R.A., Baranowski, T., et al. Evaluation of Health Promotion and Education Programs. Palo Alto, CA: Mayfield Publishing Company, 1984.
Woodward, C.A., Chambers, L.W. and Smith, K.D. Guide to Improved Data Collection in Health and Health Care Surveys (Personal Interview, Telephone Interview, Mail Surveys). Ottawa: Canadian Health Association, 1982.
Evaluation in Training Programs

Abella, K.T. Building Successful Training Programs: A Step by Step Guide. Reading, MA: Addison-Wesley Publishing Company, Inc., 1986.
*Awotua-Efebo, E.B. "Guidelines for Cost-Effective Training Product Development", Educational Technology, March, 1984, 42-44.
*Brandenburg, D.C. "Training Evaluation: What's the Current Status?", Training and Development Journal, August, 1982.
Brinkerhoff, R.O. Achieving Results from Training: How to Evaluate Human Resource Development to Strengthen Programs and Increase Impact. San Francisco, CA: Jossey-Bass, Inc., 1987.
Brinkerhoff, R.O. "Making Evaluation More Useful", Training and Development Journal, December, 1981.
Clement, R.W. and Aranda, E.K. "Evaluating Management Training: A Contingency Approach", Training and Development Journal, August, 1982.
Dahmer, B. "The Internet and You," Training and Development, Volume 49, Number 6, June, 1995, 65-70.
Dixon, N.M. Evaluation: A Tool for Improving HRD Quality. San Diego, CA: University Associates, 1990.
*Dunn, S. and T.K. "Surpassing the 'Smile Sheet' Approach to Evaluation", Training 22, No. 4, (April, 1985), 65-68, 71.
Fetterall, E. "Don't Sneer at Smile Sheets", Training News, Vol. VIII, No. 3, (November, 1986), 12-14.
Fitz-enz, J. How to Measure Human Resources Management. New York, NY: McGraw-Hill Publishing Company, 1984.
Fitz-enz, J. "Yes, You Can Weigh Training's Value," Training, Vol. 31, No. 7, (July, 1994), 54-58.
Forman, D.C. "Evaluation of Training: Present and Future", Educational Technology, October, 1980.
Hahne, C.E. "How to Measure Results of Sales Training", Training and Development Journal, November, 1977.
Hamblin, A.C. Evaluation and Control of Training. New York, NY: McGraw-Hill, 1974.
Kaufman, R., Thiagarajan, S. and MacGillis, P., eds. The Guidebook for Performance Improvement. San Francisco, CA: Jossey-Bass Inc., Publishers, 1997. (Section VI)
*Kirkpatrick, D.L., ed. Evaluating Training Programs. Washington, DC: American Society for Training and Development, 1975.
*Kirkpatrick, D.L. Evaluating Training Programs: The Four Levels. San Francisco: Berrett-Koehler Publishers, 1994.
Kraut, A.I., ed. Organizational Surveys: Tools for Assessment and Change. San Francisco, CA: Jossey-Bass Publishers, 1996.
Laird, D. Approaches to Training and Development, Rev. 2nd Ed. Reading, MA: Addison-Wesley Publishing Co., 1985.
Marshall, V. and Schriver, R. "Using Evaluation to Improve Performance," Technical & Skills Training, January, 1994.
May, L.S., Moore, C.A., and Zammit, S.J., eds. Evaluating Business and Industry Training. Hingham, MA: Kluwer Academic Publishers, 1987.
Phillips, J.J. Handbook of Training Evaluation and Measurement Methods, Third Edition. Houston, TX: Gulf Publishing Company, 1997.
Pollack, C., and Masters, R. "Using Internet Technologies to Enhance Training," Performance and Instruction, Volume 36, Number 2, February, 1997, 28-31.
Powers, E. "Step-by-Step to the Design of Sales Training Evaluation", NSPI Journal, March, 1983, 23-25.
*Salinger, R.D. and Deming, B.S. "Practical Strategies for Evaluating Training", Training and Development Journal, August, 1982.
*Simpson, D.T. "Are You an Informed Consumer of Training?", Training and Development Journal, December, 1981.
Warshauer, S. Inside Training and Development: Creating Effective Programs. San Diego, CA: University Associates, Inc., 1988.
Wulf, K. "Training Via the Internet," Training and Development, Volume 50, Number 5, May, 1996, 50-55.
Zenger, J.H. and Hargis, K. "Assessing Training Results: It's Time to Take the Plunge!", Training and Development Journal, January, 1982.
Consultation Skills

Connor, R.A. and Davidson, J.P. Marketing Your Consulting and Professional Services. Somerset, NJ: Wiley Professional Books, 1985.
Gallessich, J. The Profession and Practice of Consultation. San Francisco, CA: Jossey-Bass, Inc., 1983.
Holtz, H. How to Succeed as an Independent Consultant. Somerset, NJ: Wiley Professional Books, 1983.
Holtz, H. The Consultant's Guide to Proposal Writing. New York, NY: John Wiley & Sons, Inc., 1986.
Lippitt, G. & Lippitt, R. The Consulting Process in Action, 2nd ed. La Jolla, CA: University Associates, Inc., 1987.
Parsons, R.D. and Meyers, J. Developing Consultation Skills. San Francisco, CA: Jossey-Bass, Inc., 1984.
Vaux, A., Stockdale, M.S. and Schwerin, M.J., eds. Independent Consulting in Evaluation. Newbury Park, CA: Sage Publications, 1992.
Product Evaluation

*McAlpine, L. and Weston, C. "The Attributes of Instructional Materials," Performance Improvement Quarterly, 7(1), 1994, 19-30.
*Moseley, J.L. "Criteria for Evaluating Instructional Products", ERIC Clearinghouse on Tests, Measurement, and Evaluation, #ED281870, September 24, 1987, 1-7.
Picciano, A.G. (1998). "Appendix C: Instructional Software Evaluation Factors" in Educational Leadership and Planning for Technology, Second Edition. Upper Saddle River, New Jersey: Prentice-Hall, Inc., pp. 304-308.
Reiser, R.A. & Kegelmann, H.W. "Evaluating Instructional Software: A Review and Critique of Current Methods," ETR&D, Vol. 42, No. 3, 1994, 63-69.
Rice, M. and Valdivia, L. "A Simple Guide for Design, Use, and Evaluation of Educational Materials," Health Education Quarterly, Vol. 18(1), Spring, 1991, 79-85.
*Scriven, M. "Product Evaluation - The State of the Art," Evaluation Practice, Vol. 15, No. 1, 1994, 45-62.
*Scriven, M. "Key Evaluation Checklist," in Evaluation Thesaurus, Fourth Edition. Beverly Hills, CA: Sage Publications, 1991, 204-211.
Stockard, R.R., Murray, et al. "A Practical Model for Evaluating New Products and Technology." Aspen's Advisor for Nurse Executives, Vol. 9, No. 5, February, 1994, 4-7.
Electronic Forms of Data Collection

Brousseau, M. "Kiosks Let Dealers Touch People They Wouldn't Ordinarily Reach," Automotive News, January 27, 1997, 33.
Carr, H.H. "Is Using Computer-Based Questionnaires Better Than Using Paper?", Journal of Systems Management, Vol. 42, September, 1991, 18, 37.
Fulop, M.P., Loop-Bartick, K. and Rossett, A. "Using the World Wide Web to Conduct a Needs Assessment," Performance Improvement, Volume 36, Number 6, July, 1997, 22-27.
Gaddis, S.E. "How to Design Online Surveys," Training & Development, Volume 52, Number 6, June, 1998, 67-71.
Kimball, L. "Easier Evaluation with Web-Based Tools," Training and Development, Volume 52, Number 4, April, 1998, 54-55.
Mac Elroy, R. and Geissler, B. "Interactive Surveys Can Be More Fun Than the Traditional," Marketing News, October 24, 1994, 4-5.
Martin, L. "An Easy Way to Do Surveys," HR Magazine, Vol. 37, November, 1992, 33-34, 36.
Meyer, G. "Assessing Diversity and Culture on the PC," HR Magazine, Vol. 36, April, 1991, 25-28.
Synodinos, N.E. and Brennan, J.M. "Evaluating Microcomputer Interactive Survey Software," Journal of Business and Psychology, Vol. 4, No. 4 (Summer, 1990), 483-492.
Thach, L. "Using E-Mail to Conduct Survey Research," Educational Technology, 35, March-April, 1995, 27-31.
Wilkinson, G.L., Bennett, L.T. and Oliver, K.M. "Evaluation Criteria and Indicators of Quality for Internet Resources," Educational Technology, Vol. XXXVII, No. 3 (May-June, 1997), 52-58.
Wiss, J. "Meet MAX: Computerized Survey Taker," Marketing News, Vol. 23, May 22, 1989, 16.

Note: These references will give you both an historical and a current view of the evaluation process. The references selected for the bibliography represent theoretical and practical viewpoints. In addition, the following volumes address evaluation issues directly and indirectly.
*Fink, A., ed. The Survey Kit. Newbury Park, CA: Sage Publications, 1995.
    Volume 1: A Survey Handbook
    Volume 2: How To Ask Survey Questions
    Volume 3: How To Conduct Self-Administered and Mail Surveys
    Volume 4: How To Conduct Interviews by Telephone and in Person
    Volume 5: How To Design Surveys
    Volume 6: How To Sample In Surveys
    Volume 7: How To Measure Survey Reliability and Validity
    Volume 8: How To Analyze Survey Data
    Volume 9: How To Report On Surveys
*Handbook of Research for Educational Communications and Technology.
*International Encyclopedia of Adult Education and Training, Second Edition.
*International Encyclopedia of Educational Technology, Second Edition.
*The ASTD Training and Development Handbook, Fourth Edition.

*Suggested Readings for All Students

Revised, Fall, 2002
INSTRUCTIONAL TECHNOLOGY 7150: EDUCATIONAL PRODUCT AND PROGRAM EVALUATION
Oral Report Evaluation

Name: ______________________________        Overall Evaluation: __________

RATING SCALE: 5 = EXCELLENT   4 = VERY GOOD   3 = GOOD   2 = FAIR   1 = POOR

CONTENT (rate each item 5 4 3 2 1)
    Introduction (Clear purpose, main topics identified)
    Body (Appropriateness, theory, understanding, organization)
    Conclusion (Major points summarized, call to action)

PRESENTATION (rate each item 5 4 3 2 1)
    Crutches (Ahs, ums, you knows, and, other)
    Body Control (Gestures, eye contact, controlled movement)
    Voice (Volume, modulation, enunciation)
    Dynamics (Comfort, pace, extemporaneous language, audience participation)
    Structure (Vocabulary, grammar)
    Visual Aids (Appropriateness, readability, appearance, not "busy")
    Ability to Field Questions

Time: Begin __________    End __________
DISK#1A: WSU-COE\IT7150 Fall 2002