Running head: DEVELOPMENT OF RUBRICS 1
Nurse Educator Proposal for the Development of Rubrics in the Clinical Setting
Erin Kibbey
Ferris State University
Abstract
Measuring competency in the clinical setting is an important responsibility of the nurse
educator, although a universal method of doing so has yet to be agreed upon. While rubrics have
traditionally been used in academia, they are much less commonly used in hospital-based
education of future and new nurses. The creation and implementation of rubrics for use in the
clinical setting could provide a uniform method for documenting and evaluating competency.
This paper describes a proposed project involving the creation and implementation of rubrics for
use in the critical care internship program by preceptors and educators at a hospital in Traverse
City, Michigan. This paper further describes the project setting, preceptor, goals and objectives,
timeline of activities, and an evaluation tool. The project is proposed to take place from
January 13, 2014, to May 2, 2014.
Keywords: rubrics, clinical setting, competency, nurse educator
Nurse Educator Proposal for the Development of Rubrics in the Clinical Setting
For the future nurse educator, the ability to use assessment and evaluation techniques is an
important competency identified by the National League for Nursing (NLN, 2012). Developing
evidence-based assessment practices and implementing evaluation strategies appropriate to the
learner are also key to fulfillment of this competency (NLN, 2012). In order to help develop this
competency further, the proposed project, described throughout the rest of this paper, will take
place from January 13, 2014 to May 2, 2014. The proposed project involves the development
and implementation of rubrics for evaluating competency of nurses within the clinical setting.
The importance of documenting and measuring competency in the clinical setting cannot
be overestimated. However, the best way to accomplish this task has not been widely agreed
upon or established (Fahy et al., 2011). The degree of performance in the clinical setting can be
judged both qualitatively and quantitatively through the convenient use of a rubric form (Bonnel,
2012). Another issue associated with measuring competency in the clinical setting is that
measurement can be inconsistent and subjective amongst various evaluators (Bonnel, 2012).
Therefore, the purpose of the scholarly project described throughout the following proposal is to
establish a uniform method for documenting and evaluating clinical competence at a hospital in
northern Michigan, Munson Medical Center (MMC), through the development of rubrics.
Not only does the creation and implementation of rubrics have clinical importance, but
working on the proposed project will also help me gain experience in the assessment and
evaluation competency as defined by the NLN (2012). The purpose of this paper is to fully
describe the proposed project including setting, goals and objectives, preceptor, timeline, and
evaluation.
Setting
Although rubrics have been widely used in traditional academic settings, they have been used
less frequently as tools in the clinical setting (Frentsos, 2013). Therefore, the proposed project
will take place in the hospital setting and be created for use by preceptors and interns in the
critical care internship program. Specifically, it is proposed that this project will occur at MMC
in Traverse City, the largest hospital in northern Michigan, with 391 inpatient beds (MMC,
2013).
The critical care internship is a five-month program designed to provide nurses (either new
graduate nurses with some experience or newer nurses from non-critical care units) with the
knowledge and skills to care for critical care patients upon completion of the
program (MMC, 2013). Candidates for the program must have two years or less of experience, letters
of recommendation, and a letter of intention. Applications for the spring 2014 class are currently
under review and interviews are set to take place in January. The spring internship class
officially begins in March.
The critical care interns spend the first few weeks rotating through the various critical
care units at MMC. Participating units in this program include: the emergency department (ED),
intensive care unit (ICU), cardiothoracic unit (A2), and the adult cardiac critically ill and cardiac
interventional unit (A3). The ED is a 43-bed unit accredited as a Level II trauma center.
The ICU serves medical-surgical, trauma, and neurological patients. It contains 20 beds,
including a progressive care area that also serves a variety of patients with complex multisystem
needs. Located in MMC’s Heart Center is A2, a 30-bed unit that provides care to
cardiothoracic surgical patients from immediately post-operative until discharge. Lastly, A3 is
also located in MMC’s Heart Center and, like A2, is a 30-bed unit that is acuity adaptable. The
ICU, A2, and A3 train nurses in the internship program to manage both critical and intermediate
care patients.
After the interns have been to the various units they are then assigned to a specific unit
for the remainder of the internship. As they continue in the rest of the internship program they
are oriented to their assigned unit by various preceptors that work on the unit. There is clinical
as well as didactic preparation included as part of the program (MMC, 2013). Moreover, there is
computer based education, skills labs, cases studies, simulation, and various critical thinking
experiences embedded into the program. The interns are overseen throughout the program by
the internship coordinator, Patti Hresko. Ms. Hresko is a master’s prepared nurse and will also
serve as my preceptor for the proposed project.
Identification of Preceptor
Preceptors serve not only as teachers, but also as coaches (Ulrich, 2012). Preceptors
need to understand both the science of teaching as well as the art of teaching (Ulrich, 2012). I
believe I have selected a preceptor that is skilled in both of these areas. In addition, she is very
experienced and will be a good facilitator for my learning. The preceptor identified for the
proposed project, as previously mentioned, is Ms. Patti Hresko.
Ms. Hresko is the resource clinician and educator of MMC’s critical care internship
program. She is a master’s prepared nurse, although her degree is as a family nurse practitioner,
and she has been the coordinator and facilitator of the internship program for several years. She
also teaches a role transition class for new nurses that is required after about six months of
working at Munson. In addition, Ms. Hresko is certified as a critical care registered nurse
(CCRN) by the American Association of Critical-Care Nurses and continues to maintain her skill
set and knowledge by working in the intensive care unit at MMC once a week. She has been a
nurse for over ten years.
Since Ms. Hresko has worked with various preceptors, unit managers, unit
educators, agency management, and the critical care interns over the last several years, she is a
key stakeholder in the proposed project and will serve as a great facilitator. Her knowledge and
experience in implementing change within the critical care internship program will be
instrumental in helping guide me through the rubric implementation process. In addition, her
input and enthusiasm for the project will be valuable to helping the project succeed and
potentially carry on past the duration of the proposed project. Ms. Hresko can be reached by
telephone at 231-392-0193 or by email at phresko@mhc.net. Letters from both an MMC
representative, as well as Patti Hresko, stating approval of the proposed project are included in
Appendix A.
Goals, Objectives, and Activities
Background
Since there is no uniform measurement of clinical competence used throughout MMC, all
the interns in the critical care internship program are currently being evaluated using different
assessment practices, which has led to some difficulties in providing feedback among
preceptors, interns, and the internship coordinator (P. Hresko, personal communication,
September 5, 2012). The creation and use of rating scales or rubrics is supported by research in a
wide range of academic subjects including economics, writing, speech, dentistry, and
chiropractic medicine (McGoldrick & Peterson, 2013; O’Donnell, Oakley, Haney, O’Neill, &
Taylor, 2011; Rezaei & Lovorn, 2010; Saxton, Belanger, & Becker, 2012; Xiaohua & Canty,
2012). Although rubrics have been embraced throughout academia, nursing staff development
educators do not use rubrics consistently; instead, they typically use the nursing skills checklist
(Frentsos, 2013).
Rubrics are defined as “scaled tools with levels of achievement and clearly defined
criteria placed in a grid. Rubrics establish clear rules for evaluation and define the criteria for
performance” (O’Donnell et al., 2011, p. 1163). Rubrics typically consist of three main parts
including a scale of the levels of performance, dimensions or criteria for evaluation, and a
description of the dimensions (O’Donnell et al., 2011). Rubrics are either holistic or analytical.
According to Kirkpatrick and DeWitt (2012), holistic rubrics are more globally scored and thus
typically focus on overall performance. Analytic rubrics, on the other hand, examine each
significant characteristic of performance. Depending on the type used, rubrics can provide
summative or formative evaluation of learning. Typically analytic rubrics are chosen for
formative evaluation and holistic rubrics are better suited for summative evaluation.
Despite the lack of consistent use of rubrics in clinical nursing education, rubrics have
many benefits. Rubrics support adult learning principles, provide competency documentation
required by regulatory agencies, can improve quality of care, allow more discrimination in
judging behaviors, and can increase knowledge gain through their use (Bonnel, 2012; Frentsos,
2013). Bonnel (2012) has also noted that rating scales offer more detail about the
quality of a performance compared to nursing skill checklists. Further benefits of rubrics include
more specific guidance for graders, thus promoting reliability between graders, timely and
detailed feedback without superfluous writing, an opportunity to self-assess, and promotion of
clear communication for completion of skills using best practice (Bonnel, 2012; O’Donnell et al.,
2011; Walvoord & Anderson, 2010).
Rubric Creation
There are four main steps to creating a rubric according to Stevens and Levi (2005). The
first step involves reflection. The second step is listing and defining the specific learning
objectives. The third step consists of grouping similar components. The final step to rubric
creation is applying dimensions and descriptions.
Since the purpose of this project is to establish a uniform method for documenting and
measuring clinical competency through the use of rubrics, the first main goal of this project as
stated in Appendix B is to create the rubrics used for the critical care interns at MMC. In order
to achieve this goal several objectives were identified. The first objective related to this goal is
to obtain literature and references about rubrics. This objective was created as a first step to
understanding the creation of rubrics and how they may be best used in the specified setting.
Activities used to support this objective will include a search of various databases, compilation of a
reference list, and a review of the literature.
The second objective is to perform a needs assessment. This coincides with Stevens and
Levi’s (2005) first step to rubric creation, reflection. Reflection takes into consideration what is
desired from the learner, why the assessment is being created, what type of rubric is needed, and
other issues associated with the construction of a rubric (O’Donnell et al., 2011). Since the
involvement of key stakeholders in the creation of rubrics can provide several benefits, according
to O’Donnell et al. (2011), collaboration is important during this stage. Thus, a needs
assessment tool will be created and distributed to preceptors and unit educators. Feedback from
these reflections as well as information gleaned from meeting with staff educators will be
utilized for the next step.
Since collaboration is so beneficial to the creation of rubrics, the formation of a
collaborative team for supplying input throughout the creation process is the third objective.
Some of the benefits of collaboration during this time are the opportunity to discuss differences
and clarify misunderstandings, foster a sense of ownership, and increase the chances of creating
rubrics that everyone will accept (O’Donnell et al., 2011). Activities proposed to take place to
support this objective include identifying interested participants and key stakeholders, emailing
possible team members, and creating a forum for exchange of ideas.
The fourth objective related to the goal of developing rubrics is to determine the care
standards for the rubrics being created. This objective correlates with the second step of defining
the specific learning objectives and level of performance to be accomplished with the creation of
rubrics. According to Stevens and Levi (2005), team members should decide whether the
assessment is about knowledge content, skills, or both. Taxonomy guides can be used during
this time to clarify specific objectives and define the level and type of learning expected
(O’Donnell et al., 2011). Scales defining the level of performance usually include three to five
levels such as “excellent”, “competent”, and “needs work” (Kirkpatrick & DeWitt, 2012;
O’Donnell et al., 2011). Reviewing the standards of practice and any other application materials
needed for the development of specified learning objectives will be done at this time.
Collaboration and input from team members will also be sought as an activity to support this
objective.
The final objective is to completely develop the rubrics and a tool for feedback. During
this stage of rubric development, items with similar expectations for performance are put
together and form the rubric dimensions (O’Donnell et al., 2011). The performance or task being
evaluated is broken down into components during this step (Kirkpatrick & DeWitt, 2012). The
fourth and final step to rubric creation is application, or the creation of the rubric grid. The last
activity to support this objective is to evaluate the rubrics. A rubric should be easy to use and
interpret, valid and reliable, and fair (Bargainnier, 2003; O’Donnell et al., 2011; Stevens &
Levi, 2005). In order to make effective revisions to rubrics that are meant to be flexible and
adaptable tools, evaluation of the rubrics is required (Stevens & Levi, 2005). A metarubric is a
rubric used to evaluate rubrics (Stevens & Levi, 2005). Metarubrics can also be used
individually to refine rubric details. A metarubric will be created for the evaluation of the
rubrics developed.
Rubric Implementation
The second goal of this project is to implement the rubrics for use by the preceptors
working with the interns. In order to achieve this second goal, two more key
objectives were identified (see Appendix B). The first objective is to present the rubrics to
preceptors. This objective is important because it relates to the reliability and validity of the
rubrics. Validity refers to ensuring that the performance questioned is the performance being
measured by the rubric (O’Donnell et al., 2011). On the other hand, reliability is concerned with
consistency of ratings across multiple performances. According to O’Donnell et al. (2011), it is
best to give the raters the rubrics prior to implementation in order to increase accuracy. Thus,
time will be spent with those that will be utilizing the rubrics prior to actual trialing of the rubrics
on the units. Education materials, including a PowerPoint presentation, will be created. A chance to ask
questions and role play will be afforded to those that will be using the rubrics. Opportunities for
discussion can lead to better consistency and possible modifications to the rubrics (O’Donnell et
al., 2011).
The final objective for the implementation stage is to obtain feedback. Feedback from
team members, preceptors, educators, interns, and Ms. Hresko will be compiled during this time.
Direct feedback from those mentioned as well as indirect feedback related to the quality of
performance associated with the rubrics will be collected. Rubrics can be used by facilitators to
identify areas of student strengths and weaknesses, assisting with both formative and summative
assessment (O’Donnell et al., 2011). Self-evaluation using the metarubric will also be included
as an activity to support this objective.
Theoretical Framework
Cognitive learning theory focuses on students taking an active role in learning (Candela,
2012). When taking an active role in learning students must be able to demonstrate what they
know (Bargainnier, 2003). Cognitive learning theory focuses on mental processes and
acquisition of knowledge and not just learning how to perform a task (Candela, 2012). This
central component of cognitive learning theory is the basis for the proposed project and the
reason for moving from a checklist to measure competency in the clinical setting to using rubrics
to measure learning. According to Marcotte (2006), “well-designed rubrics help instructors in all
disciplines meaningfully assess the outcomes of the more complicated assignments that are the
basis of the problem-solving, inquiry-based, student-centered pedagogy replacing the traditional
lecture-based, teacher-centered approach in tertiary education” (para. 3). Thus, the use of rubrics
for assessment and evaluation emphasizes the application and use of knowledge, not just
measurement of isolated, discrete knowledge (Bargainnier, 2003). This emphasis is central to
cognitive learning theory and its constructivist approach to knowledge attainment.
A second theory providing a framework for the proposed project is Resnick’s self-
efficacy theory. Central to this theory is the belief that an individual can perform any given task
after observation and demonstration if they believe there is a positive effect (Peterson & Bredow,
2009). If there is a positive result, the individual will be self-motivated to perform each task.
This theory is important to apply throughout the implementation stages of the proposed project.
Using this theory as a foundation during the implementation stages takes into consideration the
belief that people have more control over what they do when they choose how to behave (Liehr
& Smith, 2008). Thus, by providing education, including a demonstration of how to use the
rubrics and their benefits, to all those involved with the use of the rubrics, it is more likely that
there will be positive outcomes. Several nursing studies have used Resnick’s self-efficacy theory
when focused on interventions related to behavioral change (Liehr & Smith, 2008). Since the
implementation phase of the proposed project focuses on changing behaviors related to
evaluation and precepting, this theory will best serve as the guiding framework during this phase.
The theory of self-efficacy could also be utilized when considering the words used to
describe the levels of performance during rubric creation. According to Bargainnier (2003),
rubrics should focus on positive attainment of desired performance. Thus, positive language
associated with descriptions can provide positive guidance. This idea is central to the theory of
self-efficacy.
Timeline
In order to support the goals and objectives of the proposed project a timeline of the
previously mentioned activities was also created and included along with the goals and
objectives (see Appendix B). This timeline will serve as a guideline to all activities required for
successful completion of this project within the given timeframe. This project will start on
January 13, 2014 and be completed by May 2, 2014. The first goal was set to be completed by
March 28, 2014 due in part to the timing of the critical care internship. The critical care
internship is set to begin the second week of March with general hospital orientation. The
interns will be starting on the specified critical care units beginning the last week in March.
Thus, the creation of the rubrics and the education pertaining to their use are planned to be
completed once the interns are actually on their assigned units. The proposed project is planned to finish
with a two week trial use of the rubrics. Feedback will be compiled at the completion of the trial
period.
Evaluation Tool
An evaluation tool (see Appendix C) was created as a means of evaluating the goals for
the proposed project. The evaluation tool utilizes a five-point Likert scale. Likert scales are the
most widely used scaling technique (Polit & Beck, 2012). A Likert scale allows the evaluator
the opportunity to express an opinion on a particular issue through indicating the degree to which
they agree or disagree (Bourke & Ihrke, 2012). Analysis of data from a Likert scale can be
computed mathematically in order to further understand evaluator attitudes. An area for
comment is also provided next to each evaluation criterion. The evaluation tool will be completed
by my preceptor and me at the end of this project.
Conclusion
Measuring clinical competence is an important job for the nurse educator. Although
rubrics have been embraced throughout much of academia and provide many benefits, their use
in evaluating nursing competence in the clinical setting has not been consistent. This paper
described a proposed project involving the creation and implementation of rubrics for use in the
critical care internship program at MMC. The preceptor that will serve as a guide in completing
the proposed project is Patti Hresko, a master’s prepared nurse. She is the internship coordinator
a key stakeholder to the proposed projects success. Several objectives and activities to support
the creation and implementation of rubrics for use in the critical care internship program at
MMC were detailed in Appendix B. Cognitive learning theory and Resnick’s self-efficacy
theory will serve as a foundation for carrying out the proposed project. A timeline of activities,
agency and preceptor agreements, and a tool for evaluation of the project were also included in
Appendices A, B, and C.
References
Bargainnier, S. (2003). Fundamentals of rubrics. Retrieved from
http://www.webpages.uidaho.edu/ele/scholars/practices/Evaluating_Projects/Resources/
Using_Rubrics.pdf
Bonnel, W. (2012). Clinical performance evaluation. In D. Billings & J. Halstead (Eds.),
Teaching in nursing: A guide for faculty (4th ed.). (pp. 485-502). St. Louis, MO: Elsevier
Saunders.
Bourke, M. P., & Ihrke, B. A. (2012). The evaluation process: An overview. In D. Billings & J.
Halstead (Eds.), Teaching in nursing: A guide for faculty (4th ed.). (pp. 422-440). St.
Louis, MO: Elsevier Saunders.
Candela, L. (2012). From teaching to learning: Theoretical foundations. In D. Billings & J.
Halstead (Eds.), Teaching in nursing: A guide for faculty (4th ed.). (pp. 202-243). St.
Louis, MO: Elsevier Saunders.
Fahy, A., Tuohy, D., McNamara, M. C., Butler, M., Cassidy, I., & Bradshaw, C. (2011).
Evaluating clinical competence assessment. Nursing Standard, 25(50), 42-48.
Frentsos, J. M. (2013). Rubrics’ role in measuring nursing staff competencies. Journal for Nurses
in Professional Development, 29(1), 19-23.
Kirkpatrick, J. M., & DeWitt, D. A. (2012). Strategies for assessing and evaluating learning
outcomes. In D. Billings & J. Halstead (Eds.), Teaching in nursing: A guide for faculty
(4th ed.). (pp. 441-463). St. Louis, MO: Elsevier Saunders.
Liehr, P., & Smith, M. J. (Eds.). (2008). Middle range theory for nursing (2nd ed.). New York:
Springer Publishing Company.
Marcotte, M. (2006). Building a better mousetrap: The rubric debate. Viewpoints: Journal of
Developmental and Collegiate Teaching, Learning, and Assessment. Retrieved from
http://faculty.ccp.edu/dept/viewpoints/w06v7n2/rubrics1.htm
McGoldrick, K., & Peterson, B. (2013). Using rubrics in economics. International Review of
Economics Education, 12, 33-47.
Munson Medical Center [MMC]. (2013). New graduate critical care nurse internship. Retrieved
from http://www.munsonhealthcare.org/upload/docs/HR/Critical%20Care
%20Internship.pdf
National League for Nursing [NLN]. (2012). The scope of practice for academic nurse
educators: 2012 revision. New York, NY: Author.
O’Donnell, J.A., Oakley, M., Haney, S., O’Neill, P.N., & Taylor, D. (2011). Rubrics 101: A
primer for rubric development in dental education. Journal of Dental Education, 75(9),
1163-1175.
Peterson, S., & Bredow, T. (2009). Middle range theories: Application to nursing research (2nd
ed.). St. Paul, MN: Lippincott Williams & Wilkins.
Polit, D. F., & Beck, C. T. (2012). Nursing research: Generating and assessing evidence for
nursing practice (9th ed.). Philadelphia, PA: Lippincott Williams & Wilkins.
Rezaei, A. R., & Lovorn, M. (2010). Reliability and validity of rubrics for assessment through
writing. Assessing Writing, 15, 18-39.
Saxton, E., Belanger, S., & Becker, W. (2012). The critical thinking analytic rubric (CTAR):
Investigating intra-rater and inter-rater reliability of a scoring mechanism for critical
thinking performance assessments. Assessing Writing, 17(4), 251-271.
Stevens, D. D., & Levi, A. J. (2005). Introduction to rubrics: An assessment tool to save grading
time, convey effective feedback, and promote student learning. Sterling, VA: Stylus.
Retrieved from https://resources.oncourse.iu.edu/access/content/user/fpawan/L540%20_
%20CBI/steven-rubrics.pdf
Ulrich, B. (Ed.). (2012). Mastering precepting: A nurse’s handbook for success. Indianapolis,
IN: Sigma Theta Tau International.
Walvoord, B., & Anderson, V. A. (2010). Effective grading: A tool for learning and assessment.
San Francisco, CA: Jossey-Bass.
Xiaohua, H., & Canty, A. (2012). Empowering student learning through rubric-referenced self-
assessment. Journal of Chiropractic Education, 26(1), 24-31.
Appendix A
Agreements
Appendix B
Project Planning Guide
Title of Project: Development of Rubrics for the Clinical Setting
Goal 1: Create rubrics for measuring competency of the critical care interns at MMC

Objective 1.1: Obtain literature and other references containing information about the use of rubrics and their development, development of clinical competency, nursing skills checklists, clinical performance evaluation, educational documentation requirements, adult learning theories, and cognitive learning theory
Activities:
1.1a Search CINAHL, PubMed, ERIC, and other databases for literature (complete by Jan. 19, 2014)
1.1b Compile literature reference list (complete by Jan. 19, 2014)
1.1c Review literature for relevance to proposed project (complete by Jan. 19, 2014)

Objective 1.2: Perform a needs assessment as to what rubrics need to be created
Activities:
1.2a Compose a needs assessment tool evaluating preceptor perception of assessment and evaluation strategies (complete by Jan. 26, 2014)
1.2b Distribute tool to preceptors involved in internship program and unit educators (distribute Jan. 27, 2014)
1.2c Compile feedback (complete by Feb. 2, 2014)
1.2d Meet with MMC staff educator(s) for any additional information and resources to utilize for the process (complete by Feb. 9, 2014)

Objective 1.3: Generate a team to collaborate with throughout the process
Activities:
1.3a Use needs assessment feedback to identify interested participants to collaborate with throughout the process (complete by Feb. 2, 2014)
1.3b Seek help through email to possible interested participants (complete by Feb. 4, 2014)
1.3c Create a forum/blog for open exchange of ideas and updating of information amongst team members (complete by Feb. 9, 2014)

Objective 1.4: Determine patient-care standards and evidence-based practices for rubrics being created
Activities:
1.4a Review standards of practice for rubric possibilities (complete by Feb. 16, 2014)
1.4b Collaborate with team and MMC staff educators for input (complete by Feb. 23, 2014)

Objective 1.5: Develop rubrics and tool for feedback
Activities:
1.5a Complete steps three and four from Stevens and Levi’s (2005) stages of rubric development, including grouping and labeling, and application (complete by March 28, 2014)
1.5b Build a metarubric (complete by March 28, 2014)

Goal 2: Implement rubrics for use by the preceptors working with the interns

Objective 2.1: Present the rubrics to preceptors
Activities:
2.1a Develop PowerPoint and educational materials for educating preceptors about the rubrics (complete by April 6, 2014)
2.1b Present the information to preceptors and collaborative team (complete by April 13, 2014)
2.1c Trial rubrics on the units (April 13 to April 27, 2014)

Objective 2.2: Obtain feedback to determine ease of use and reliability of created rubrics
Activities:
2.2a Compile feedback from first two weeks of implementation using formative and summative evaluation (complete by May 2, 2014)
Appendix C
Evaluation
Student name: Erin Kibbey________________________________________________________
Evaluated by: __________________________________________________________________
Each goal/objective below is rated on a five-point Likert scale (Strongly Disagree, Disagree, Neutral, Agree, Strongly Agree), with space provided for comments:

- Demonstrates ability to use literature to design evidence-based rubrics for use in the clinical setting
- Demonstrates ability to participate in interdisciplinary efforts to develop rubrics for use at MMC
- Demonstrates ability to create rubrics for measuring competency of the critical care interns at MMC
- Rubrics submitted on time according to proposed guide
- Teaching strategies for rubric implementation are grounded in educational theory and evidence-based teaching practices
- Uses information technologies skillfully to support the teaching-learning process
- Communication with preceptor was appropriate and professional
- Demonstrates ability to compile feedback on rubric implementation trial on units
- Demonstrates ability to use assessment and evaluation data to enhance the teaching-learning process
Bibliography
Adamson, K., & Kardong-Edgren, S. (2012). A method and resources for assessing the reliability of
simulation evaluation instruments. Nursing Education Perspectives, 33(5), 334-339.
Adamson, K., Gubrud, P., Sideras, & Lasater, K. (2012). Assessing the reliability, validity, and
use of the Lasater clinical judgment rubric: Three approaches. Journal of Nursing
Education, 51(2), 66-73.
Allen, P., Lauchner, K., Bridges, R., Francis-Johnson, P., McBride, S., & Olivarez, A. (2008).
Evaluating continuing competency: A challenge for nursing. Journal of Continuing
Education in Nursing, 39(2), 81-85. doi:10.3928/00220124-20080201-02.
Ashcraft, A., & Opton, L. (2009). Evaluation of the Lasater clinical judgment rubric. Clinical
Simulation in Nursing, 5(3), e130.
Ashcraft, A., Opton, L., Bridges, R., Caballero, S., Veesart, A., & Weaver, C. (2013). Simulation
evaluation using a modified Lasater clinical judgment rubric. Nursing Education
Perspectives, 34(2), 122-126.
Bargainnier, S. (2003). Fundamentals of rubrics. Retrieved from
http://www.webpages.uidaho.edu/ele/scholars/practices/Evaluating_Projects/Resources/
Using_Rubrics.pdf
Blum, C., Borglund, S., & Parcells, D. (2010). High-fidelity nursing simulation: Impact on
student self-confidence and clinical competence. International Journal of Nursing
Education Scholarship, 7, 1-16. doi:10.2202/1548-923X.2035.
Bonnel, W. (2012). Clinical performance evaluation. In D. Billings & J. Halstead (Eds.),
Teaching in nursing: A guide for faculty (4th ed.). (pp. 485-502). St. Louis, MO: Elsevier
Saunders.
Bourbonnais, F. F., Langford, S., & Giannantonia, L. (2008). Development of a clinical
evaluation tool for baccalaureate nursing students. Nurse Education in Practice, 8, 62-71.
Bourke, M. P., & Ihrke, B. A. (2012). The evaluation process: An overview. In D. Billings & J.
Halstead (Eds.), Teaching in nursing: A guide for faculty (4th ed.). (pp. 422-440). St.
Louis, MO: Elsevier Saunders.
Candela, L. (2012). From teaching to learning: Theoretical foundations. In D. Billings & J.
Halstead (Eds.), Teaching in nursing: A guide for faculty (4th ed.). (pp. 202-243). St.
Louis, MO: Elsevier Saunders.
Cato, M., Lasater, K., & Peeples, A. (2009). Nursing students' self-assessment of their simulation
experiences. Nursing Education Perspectives, 30(2), 105-108.
Connors, P. (2008). Assessing written evidence of critical thinking using an analytic rubric.
Journal of Nutrition Education and Behavior, 40(3), 193-194.
Cowan, D. T., Norman, I., & Coopamah, V. P. (2005). Competence in nursing practice: A
controversial concept – A focused review of literature. Nurse Education Today, 25(5),
355-362.
Cusack, L., & Smith, M. (2010). Power inequalities in the assessment of nursing competency
within the workplace: Implications for nursing management. Journal of Continuing
Education in Nursing, 41(9), 408-412. doi:10.3928/00220124-20100601-07.
Davis, A. H., & Kimble, L. P. (2011). Human patient simulation evaluation rubrics for nursing
education: Measuring the essentials of baccalaureate education for professional nursing
practice. Journal of Nursing Education, 50(11), 605-611. doi:10.3928/01484834-
20110715-01.
Dolan, G. (2003). Assessing student nurse clinical competency: Will we ever get it
right? Journal of Clinical Nursing, 12(1), 132-141. doi:10.1046/j.1365-
2702.2003.00665.x.
Fahy, A., Tuohy, D., McNamara, M. C., Butler, M., Cassidy, I., & Bradshaw, C. (2011).
Evaluating clinical competence assessment. Nursing Standard, 25(50), 42-48.
Frentsos, J. M. (2013). Rubrics' role in measuring nursing staff competencies. Journal for Nurses
in Professional Development, 29(1), 19-23.
Gantt, L. (2010). Using the Clark simulation evaluation rubric with associate degree and
baccalaureate nursing students. Nursing Education Perspectives, 31(2), 101-105.
Gasaymeh, A. (2011). The implications of constructivism for rubric design and use. Paper
presented at the meeting of Higher Education International Conference, Beirut. Retrieved
from http://heic.info/assets/templates/heic2011/papers/05-Al-Mothana_Gasaymeh.pdf.
Gould, D., Berridge, E., & Kelly, D. (2006). The National Health Service Knowledge and Skills
Framework and its implications for continuing professional development in nursing.
Nurse Education Today, 27, 26-34. doi:10.1016/j.nedt.2006.02.006.
Hall, M. A. (2013). An expanded look at evaluating clinical performance: Faculty use of
anecdotal notes in the U.S. and Canada. Nurse Education in Practice, 13(4), 271-276.
doi:10.1016/j.nepr.2013.02.001.
Hanley, E., & Higgins, A. (2005). Assessment of practice in intensive care: Students' perceptions
of a clinical competence assessment tool. Intensive and Critical Care Nursing, 21(5),
276-283.
Indhraratana, A., & Kaemkate, W. (2012). Developing and validating a tool to assess ethical
decision-making ability of nursing students, using rubrics. Journal of International
Education Research, 8(4), 393-398.
Isaacson, J., & Stacy, A. (2009). Rubrics for clinical evaluation: Objectifying the subjective
experience. Nurse Education in Practice, 9(2), 134-140. doi:10.1016/j.nepr.2008.10.015.
Jensen, R. (2013). Clinical reasoning during simulation: Comparison of student and faculty
ratings. Nurse Education in Practice, 13(1), 23-28. doi:10.1016/j.nepr.2012.07.001.
Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity, and
educational consequences. Educational Research Review, 2, 130-144.
doi:10.1016/j.edurev.2007.05.002.
Kirkpatrick, J. M., & DeWitt, D. A. (2012). Strategies for assessing and evaluating learning
outcomes. In D. Billings & J. Halstead (Eds.), Teaching in nursing: A guide for faculty
(4th ed.). (pp. 441-463). St. Louis, MO: Elsevier Saunders.
Knowles, M.S. (1980). The modern practice of adult learning. Chicago, IL: Follett.
Lasater, K. (2007). Clinical judgment development: Using simulation to create a rubric. Journal
of Nursing Education, 46, 496-503.
Lasater, K. (2011). Clinical judgment: The last frontier for evaluation. Nurse Education in
Practice, 11(2), 86-92. doi:10.1016/j.nepr.2010.11.013.
Lasater, K., & Nielsen, A. (2009). Reflective journaling for clinical judgment development and
evaluation. Journal of Nursing Education, 48(1), 40-44.
Lenburg, C. B., Abdur-Rahman, V. Z., Spencer, T. S., Boyer, S. A., & Klein, C. J. (2011).
Implementing the COPA model in nursing education and practice settings: Promoting
competence, quality care, and patient safety. Nursing Education Perspectives, 32(5), 290-
296. doi:10.5480/1536-5026-32.5.290.
Liehr, P., & Smith, M. J. (Eds.). (2008). Middle range theory for nursing (2nd ed.). New York,
NY: Springer Publishing Company.
Marcotte, M. (2006). Building a better mousetrap: The rubric debate. Viewpoints: Journal of
Developmental and Collegiate Teaching, Learning, and Assessment. Retrieved from
http://faculty.ccp.edu/dept/viewpoints/w06v7n2/rubrics1.htm
McCarthy, B., & Murphy, S. (2008). Assessing undergraduate nursing students in clinical
practice: Do preceptors use assessment strategies? Nurse Education Today, 28(3), 301-
313. doi:10.1016/j.nedt.2007.06.002.
McGoldrick, K., & Peterson, B. (2013). Using rubrics in economics. International Review of
Economics Education, 12, 33-47.
Munson Medical Center [MMC]. (2013). New graduate critical care nurse internship. Retrieved
from http://www.munsonhealthcare.org/upload/docs/HR/Critical%20Care
%20Internship.pdf
National League for Nursing [NLN]. (2012). The scope of practice for academic nurse
educators: 2012 revision. New York, NY: Author.
Nicholson, P., Gillis, S., & Dunning, A. (2009). The use of scoring rubrics to determine clinical
performance in the operating suite. Nurse Education Today, 29(1), 73-82.
doi:10.1016/j.nedt.2008.06.011.
Northern Illinois University Faculty Development and Instructional Design Center. (n.d.).
Rubrics for assessment. Retrieved from
http://www.niu.edu/facdev/resources/guide/assessment/rubrics_for_assessment.pdf
O’Donnell, J.A., Oakley, M., Haney, S., O’Neill, P.N., & Taylor, D. (2011). Rubrics 101: A
primer for rubric development in dental education. Journal of Dental Education, 75(9),
1163-1175.
Peterson, S., & Bredow, T. (2009). Middle range theories: Application to nursing research (2nd
ed.). St. Paul, MN: Lippincott Williams & Wilkins.
Polit, D. F., & Beck, C. T. (2012). Nursing research: Generating and assessing evidence for
nursing practice (9th ed.). Philadelphia, PA: Lippincott Williams & Wilkins.
Reddy, Y. M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment &
Evaluation in Higher Education, 35(4), 435-448.
Rezaei, A. R., & Lovorn, M. (2010). Reliability and validity of rubrics for assessment through
writing. Assessing Writing, 15, 18-39.
Riitta-Liisa, A., Suominen, T., & Leino-Kilpi, H. (2008). Competence in intensive and critical
care nursing: A literature review. Intensive and Critical Care Nursing, 24(2), 78-89.
doi:10.1016/j.iccn.2007.11.006.
Robb, Y., Fleming, V., & Dietert, C. (2002). Measurement of clinical performance of nurses: A
literature review. Nurse Education Today, 22, 293-300. doi:10.1054/nedt.2001.0714.
Roberts, D. (2013). The clinical viva: An assessment of clinical thinking. Nurse Education
Today, 33(4), 402-406.
Saxton, E., Belanger, S., & Becker, W. (2012). The critical thinking analytic rubric (CTAR):
Investigating intra-rater and inter-rater reliability of a scoring mechanism for critical
thinking performance assessments. Assessing Writing, 17(4), 251-271.
Shipman, D., Roa, M., Hooten, J., & Wang, Z. (2012). Using the analytic rubric as an evaluation
tool in nursing education: The positive and the negative. Nurse Education Today, 32(3),
246-249. doi:10.1016/j.nedt.2011.04.007.
Steffan, K., & Goodin, H. (2010). Preceptors' perceptions of a new evaluation tool used during
nursing orientation. Journal for Nurses in Staff Development, 26(3), 116-122.
doi:10.1097/NND.0b013e31819aa116.
Stevens, D. D., & Levi, A. J. (2005). Introduction to rubrics: An assessment tool to save grading
time, convey effective feedback, and promote student learning. Sterling, VA: Stylus.
Retrieved from https://resources.oncourse.iu.edu/access/content/user/fpawan/L540%20_
%20CBI/steven-rubrics.pdf
Tanner, C. (2006). Thinking like a nurse: A research-based model of clinical judgment in
nursing. Journal of Nursing Education, 45(6), 204-211.
The Teaching, Learning, and Technology Group. (n.d.). Rubrics: Definition, tools, examples,
references. Retrieved from http://www.tltgroup.org/resources/flashlight/rubrics.htm
Ulfvarson, J., & Oxelmark, L. (2012). Developing an assessment tool for intended learning
outcomes in clinical practice for nursing students. Nurse Education Today, 32(6), 703-
708.
Victor-Chmil, J., & Larew, C. (2013). Psychometric properties of the Lasater clinical judgment
rubric. International Journal of Nursing Education Scholarship, 10(1), 1-8.
doi:10.1515/ijnes-2012-0030.
Walsh, C. M., Seldomridge, L. A., & Badros, K. K. (2008). Developing a practical evaluation
tool for preceptor use. Nurse Educator, 33(3), 113-117.
Walvoord, B., & Anderson, V. A. (2010). Effective grading: A tool for learning and assessment.
San Francisco, CA: Jossey-Bass.
Waters, C., Rochester, S., & McMillan, M. (2012). Drivers for renewal and reform of
contemporary nursing curricula: A blueprint for change. Contemporary Nurse: A Journal
for the Australian Nursing Profession, 41(2), 206-215.
He, X., & Canty, A. (2012). Empowering student learning through rubric-referenced self-
assessment. Journal of Chiropractic Education, 26(1), 24-31.