
RANZ'S PART

Documentation - Documentation is any written or electronically generated information about a client that describes the care or service provided to that client. Health records may be paper documents or electronic documents, such as electronic medical records, faxes, e-mails, audio or video tapes and images. Through documentation, nurses communicate their observations, decisions, actions and the outcomes of these actions for clients. Documentation is an accurate account of what occurred and when it occurred.

Directives for Documentation - Requirements for documentation and for the sharing, retention and disposal of this information are drawn from several sources: statutory regulations; Standards of Practice; agency policies and procedures; and legal principles.

Statutory Regulation - There are no laws stating specifically how and what nurses must document. Agencies generally develop documentation policies which reflect provincial and federal government statutes and/or other relevant documents. Examples of statutes that guide policy (British Columbia unless noted): Coroners Act; Health Professions Act; Child, Family and Community Service Act; Hospital Act; Controlled Drugs and Substances Act (federal); Health Care (Consent) and Facilities Act; Electronic Transactions Act; Limitation Act; Evidence Act; Medical Practitioners Act; Freedom of Information and Protection of Privacy Act; Mental Health Act; Health Act.

    DOCUMENTATION FORMATS

    1. Narrative Documentation - provides pertinent information written mainly in

    paragraph format. Narrative Example: Date: 3/3/04 Patient: John Smith Pt. RTC

reporting no adverse effects from tx last visit or from HEP. He stated that he feels as though his wrist & ankle are moving a little better and the edema in the hand has _. He

    reports that he is able to shower (I) using a plastic chair in the tub and feels like he has

    improved c his ability to dress himself. AROM of the (L) wrist is as follows: flexion 30,

    extension 30, UD 15, RD 20, supination 45, and pronation 60; (L) knee: 0-135; (L)

    ankle DF-PF 5-45. Figure 8 wrist girth is 35.5 cm and ankle figure 8 girth is 43 cm on

    the (L). Pt. is ambulating household distances (I) c cx using (L) UE platform, PWB 50%

    on the (L) LE. (I) with all

2. SOAP Notes (subjective, objective, assessment, and plan) - a method of documentation employed by health care providers to write out notes in a patient's chart, along with other common formats, such as the admission note. Example: Surgery

    Service, Dr. Jones S: No Chest Pain or Shortness of Breath. "Feeling better today."

    Patient reports flatus. O: Afebrile, P 84, R 16, BP 130/82. No acute distress. Neck no

    JVD, Lungs clear Cor RRR Abd Bowel sounds present, mild RLQ tenderness, less than

yesterday. Wounds look clean. Ext without edema. A: Patient is a 37-year-old man on

    post-operative day 2 for laparoscopic appendectomy, recently passed flatus. P:


    Recovering well. Advance diet. Continue to monitor labs. Prepare for discharge home

    tomorrow morning.
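To make the four-part structure concrete, here is a minimal sketch (in Python, not part of any charting standard) of a SOAP note modeled as a structured record; the field contents are condensed from the example above.

    from dataclasses import dataclass

    @dataclass
    class SOAPNote:
        """One charting entry in the SOAP format."""
        subjective: str   # what the patient reports
        objective: str    # measurable findings (vitals, exam)
        assessment: str   # the provider's interpretation
        plan: str         # next steps in care

        def render(self) -> str:
            return (f"S: {self.subjective}\nO: {self.objective}\n"
                    f"A: {self.assessment}\nP: {self.plan}")

    note = SOAPNote(
        subjective='No chest pain or shortness of breath. "Feeling better today."',
        objective="Afebrile, P 84, R 16, BP 130/82. Mild RLQ tenderness.",
        assessment="POD 2 after laparoscopic appendectomy; recovering well.",
        plan="Advance diet, monitor labs, prepare for discharge.",
    )
    print(note.render())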

    3. POMR (problem-oriented medical record) - Method of recording data about

    the health status of a patient in a problem-solving system. POMR preserves the data in

an easily accessible way that encourages ongoing assessment and revision of the health care plan by all members of the health care team. Example: Case

Signalment: K9, 5 yo, f/s, Miniature Schnauzer "Tessa" Hx/PE: Tessa ate a pork chop 2 days ago. Approximately 8 hours ago, she became depressed and started vomiting. She has vomited a clear, yellow fluid 6 to 8 times. She is completely anorexic now. She has no previous medical illness or surgery except for a spay after one litter at 3 years of age. She has had no known adverse drug reactions, nor is there any history of trauma or toxin exposure. She is an indoor dog and current on vaccinations and heartworm preventative. She eats Kibbles and Bits free choice and some people food. No C/S/D/PUPD. PE: 10 kg. QAR, 7% deH2O, T=102.0, P=140, R=40, mm pink, 1-2 sec. 2 cm SC soft mass on the right flank; tense abdomen, resentful of palpation; no other abnormal findings. IPL: 1. Vomiting (ch. by anorexia, deH2O, abdominal pain, depression) 2. Subcutaneous mass (right flank) (this is what you do after you've seen the patient in the exam room)

Problem / Dx R/O / Dx Plans / Rx Plans / CE:
1. Vomiting / primary GI (e.g., obstruction, inflammation, toxic) vs. secondary GI (pancreatic, renal, adrenal, hepatic) / abdominal rads, pretx CBC, UA, chem panel w/ lipase / NPO, IV LRS (700 cc replace, 600 cc maint., 100 cc ong. loss) / need supportive care, minimal dx risk
2. Subcutaneous mass / inflammation; benign or malignant neoplasia; trauma / FNA w/ cytology / none / pending results

    Signature---------------

    --- Source: http://vetsites.vin.com/kidney/POMR.html
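The LRS volumes in the Rx plan above follow from standard fluid-therapy arithmetic. A minimal sketch, assuming the common canine estimates of body weight (kg) x dehydration fraction x 1000 cc for the replacement deficit and roughly 60 cc/kg/day for maintenance; the ongoing-loss figure is an estimate taken from the history.

    def fluid_plan_cc(weight_kg, dehydration_pct,
                      maintenance_cc_per_kg=60.0, ongoing_loss_cc=100.0):
        """24-hour crystalloid plan, all volumes in cc (assumed estimates)."""
        replacement = weight_kg * (dehydration_pct / 100.0) * 1000.0  # deficit
        maintenance = weight_kg * maintenance_cc_per_kg               # per day
        total = replacement + maintenance + ongoing_loss_cc
        return {"replace": replacement, "maint": maintenance,
                "ongoing": ongoing_loss_cc, "total_24h": total}

    # Tessa: 10 kg at 7% dehydration -> 700 replace + 600 maint + 100 ongoing
    print(fluid_plan_cc(10, 7))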

4. Functional Outcome Reporting - Highlights how your clients' injuries affect their daily lives and how your massage allows them to make progress towards resuming their regular activities. The assessment of client status, the interventions carried out and the impact of the interventions on client outcomes are organized under the headings of data, action and response. Example: Data: Subjective and/or objective information that supports the stated focus or describes the client status at the time of a significant event or intervention. Action: Completed or planned nursing interventions based on the nurse's assessment of the client's status. Response: Description of the impact of the interventions on client outcomes. Consider Piper, a postal carrier who can't walk more than 20 steps without severe pain and fatigue when she comes for her first massage session. After her third session, Piper can now walk 50 feet with moderate pain and fatigue and up to 100 feet before feeling severe pain and fatigue. If you focus on pain, Piper is likely to report that she is still suffering severe pain and fatigue when walking and you


    might conclude that she hasn't made any progress. If you focus on her activities of daily

    living, you'll notice that she can now walk around her house, and therefore is making

    progress.

Source: http://www.wincityinc.com/products/massagesoapnotes/1.1/prod01_massagesoapnotes_functionalreporting.htm
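A toy sketch of the underlying idea: record a functional measure per session and report the change in function rather than the (unchanged) pain rating. All numbers are hypothetical stand-ins for Piper's sessions.

    # Each entry: (session, feet walked before severe pain, pain rating).
    # Values are hypothetical; the baseline stands in for "about 20 steps".
    progress = [
        (1, 25, "severe"),
        (3, 100, "severe"),
    ]

    first, last = progress[0], progress[-1]
    print(f"Pain rating: {first[2]} -> {last[2]} (no apparent change)")
    print(f"Distance before severe pain: {first[1]} -> {last[1]} ft "
          f"(+{last[1] - first[1]} ft: clear functional progress)")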

5. Focus Charting - A method of documentation in which the nurse identifies a focus based on client concerns or behaviors determined during the assessment. Example: Data: Subjective and/or objective information that supports the stated focus or describes the client status at the time of a significant event or intervention. Action: Completed or planned nursing interventions based on the nurse's assessment of the client's status. Response: Description of the impact of the interventions on client outcomes.

Source: https://www.crnbc.ca/Standards/Lists/StandardResources/151NursingDocumentation.pdf
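As a small illustration (the focus and the D/A/R text below are invented, not from the source), a focus charting entry can be kept as a simple focus-plus-DAR record:

    # A focus charting (F-DAR) entry; all field values are hypothetical.
    entry = {
        "focus": "Acute pain",
        "data": "Client rates incision pain 7/10, guarding abdomen.",
        "action": "Repositioned client; administered analgesic as ordered.",
        "response": "Pain 3/10 after 30 minutes; resting comfortably.",
    }
    for part in ("focus", "data", "action", "response"):
        print(f"{part.capitalize()}: {entry[part]}")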

    Tools for Documentation

    1. Worksheets and kardexes - Nurses use worksheets to organize the care they

    provide, and to manage their time and multiple priorities. Kardexes are used to

    communicate current orders, upcoming tests or surgeries, special diets or the use of

    aids for independent living specific to an individual client (College of Nurses of Ontario,

    2002). If a paper format is used, entries may be erasable as long as the assessment,

    nursing interventions carried out and the impact of these interventions on client

outcomes are documented in the permanent health record. When the kardex is the only documentation of the client's care plan, it is kept as part of the permanent record.

    2. Client care plans - Care plans are outlines of care for individual clients and make up

part of the permanent health record. Care plans are written in ink (unless electronic), kept up-to-date, and clearly identify the needs and wishes of the client.

    3. Flow sheets and checklists - Flow sheets and checklists are used to document

    routine care and observations that are recorded on a regular basis (e.g., activities of

    daily living, vital signs, intake and output). Flow sheets and checklists are part of the

    permanent health record, and can be used as evidence in legal proceedings (College of

    Nurses of Ontario, 2002). Symbols (e.g., check marks) may be used on flow sheets or

    checklists as long as it is clear who performed the assessment or intervention and the

    meaning of each of the symbols is identified in agency policy.
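As an illustration only (any real legend and items come from agency policy), a flow sheet row might pair each policy-defined symbol with the initials of whoever performed the check:

    # Symbol legend as it might be defined in agency policy (hypothetical).
    LEGEND = {"X": "completed", "O": "refused", "-": "not applicable"}

    # One flow sheet row; items, symbols and initials are illustrative.
    row = {
        "time": "08:00",
        "entries": {"turn & reposition": "X", "ambulation": "O"},
        "initials": "JS",
    }
    for item, symbol in row["entries"].items():
        print(f'{row["time"]} {item}: {LEGEND[symbol]} ({row["initials"]})')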

4. Care maps and clinical pathways - Care maps and clinical pathways outline what

    care will be done and what outcomes are expected over a specified time frame for a

    usual client within a case type or grouping. Nurses individualize care maps and clinical


pathways to meet clients' specific needs (e.g., by making changes to items that are not

    appropriate). If the status of clients varies from that outlined on the care map or clinical

    pathway at a particular time period, the variance is documented, including the reasons

    and action plan to address it.
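A variance entry of the kind described above might simply pair the expected outcome with what actually happened, plus the reason and the action plan; the fields and values here are hypothetical.

    # A care-path variance record; field names and values are hypothetical.
    variance = {
        "expected": "Ambulating hallway distances by post-op day 2",
        "actual": "Ambulating room distances only",
        "reason": "Dizziness on standing",
        "action_plan": "Hold hallway ambulation; re-assess next shift",
    }
    for field, value in variance.items():
        print(f"{field:12} {value}")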

5. Monitoring strips - Monitoring strips (e.g., cardiac, fetal or thermal monitoring; blood pressure testing) provide important assessment data and are included as part of the

    permanent health record.

JANILEY'S PART

    Use of Technology

1. Electronic Documentation - A client's electronic health record is a collection of the personal health information of a single individual, entered or accepted by health care providers, and stored electronically under strict security.

2. Fax Transmission - A convenient and efficient method for communicating information between health care providers. Protection of client confidentiality is the most significant risk in fax transmission, and special precautions are required when using this

    form of technology.

3. Electronic Mail - The use of e-mail by health care organizations and health care professionals is becoming more widespread as a result of its speed, reliability, convenience and low cost. Unfortunately, the factors that make the use of e-mail so

    advantageous also pose significant confidentiality, security and legal risks.

    4. Telenursing - Nurses who provide telephone care are required to document

    the telephone interaction. Documentation may occur in a written form (e.g., log book or

    client record form) or via computer. Standardized protocols that guide the information

    obtained from the caller and the advice given are useful in both providing and

    documenting telephone nursing care.

Source: https://www.crnbc.ca/Standards/Lists/StandardResources/151NursingDocumentation.pdf


ALISSA'S PART

Evaluation - As defined by the American Evaluation Association, evaluation involves assessing the strengths and weaknesses of programs, policies, personnel, products, and organizations to improve their effectiveness.


    Evaluation is the systematic collection and analysis of data needed to make decisions, a

    process in which most well-run programs engage from the outset. Here are just some of

    the evaluation activities that are already likely to be incorporated into many programs or

    that can be added easily:

Pinpointing the services needed: for example, finding out what knowledge, skills, attitudes, or behaviors a program should address.

    Establishing program objectives and deciding the particular evidence (such as the

    specific knowledge, attitudes, or behavior) that will demonstrate that the objectives

    have been met. A key to successful evaluation is a set of clear, measurable, and

    realistic program objectives. If objectives are unrealistically optimistic or are not

    measurable, the program may not be able to demonstrate that it has been

successful even if it has done a good job.

Developing or selecting from among alternative program approaches: for example, trying different curricula or policies and determining which ones best achieve the goals.

Tracking program objectives: for example, setting up a system that shows who gets services, how much service is delivered, how participants rate the services they receive, and which approaches are most readily adopted by staff.

Trying out and assessing new program designs: determining the extent to which a

    particular approach is being implemented faithfully by school or agency personnel or

    the extent to which it attracts or retains participants.

    Rossi and Freeman (1993) define evaluation as "the systematic application of social

    research procedures for assessing the conceptualization, design, implementation, and

    utility of ... programs." There are many other similar definitions and explanations of


    "what evaluation is" in the literature. Our view is that, although each definition, and in

    fact, each evaluation is slightly different, there are several different steps that are

    usually followed in any evaluation. It is these steps which guide the questions

    organizing this handbook. An overview of the steps of a "typical" evaluation follows.

    The Goals of Evaluation

    The generic goal of most evaluations is to provide "useful feedback" to a variety of

    audiences including sponsors, donors, client-groups, administrators, staff, and other

    relevant constituencies. Most often, feedback is perceived as "useful" if it aids in

    decision-making. But the relationship between an evaluation and its impact is not a

    simple one -- studies that seem critical sometimes fail to influence short-term decisions,

    and studies that initially seem to have no influence can have a delayed impact when


    more congenial conditions arise. Despite this, there is broad consensus that the major

    goal of evaluation should be to influence decision-making or policy formulation through

    the provision of empirically-driven feedback.

NARCIS' PART

Evaluation Strategies - 'Evaluation strategies' means broad, overarching perspectives on

    evaluation. They encompass the most general groups or "camps" of evaluators;

    although, at its best, evaluation work borrows eclectically from the perspectives of all

    these camps. Four major groups of evaluation strategies are discussed here. Scientific-

    experimental models are probably the most historically dominant evaluation strategies.

    Taking their values and methods from the sciences -- especially the social sciences --

they prioritize the desirability of impartiality, accuracy, objectivity and the validity of

the information generated. Included under scientific-experimental models would be: the tradition of experimental and quasi-experimental designs; objectives-based research

    that comes from education; econometrically-oriented perspectives including cost-

    effectiveness and cost-benefit analysis; and the recent articulation of theory-driven

evaluation. The second class of strategies is management-oriented systems models.

    Two of the most common of these are PERT, the Program Evaluation

    and Review Technique, and CPM, the Critical Path Method. Both have been widely

    used in business and government in this country. It would also be legitimate to include

the Logical Framework or "Logframe" model developed at the U.S. Agency for International

    Development and general systems theory and operations research approaches in this

category. Two management-oriented systems models were originated by evaluators: the UTOS model where U stands for Units, T for Treatments, O for Observing

    Observations and S for Settings; and the CIPP model where the C stands for Context,

    the I for Input, the first P for Process and the second P for Product. These

    management-oriented systems models emphasize comprehensiveness in evaluation,

    placing evaluation within a larger framework of organizational activities. The third class

of strategies is the qualitative/anthropological models. They emphasize the importance

    of observation, the need to retain the phenomenological quality of the evaluation

    context, and the value of subjective human interpretation in the evaluation process.

    Included in this category are the approaches known in evaluation as naturalistic or

    'Fourth Generation' evaluation; the various qualitative schools; critical theory and art

    criticism approaches; and, the 'grounded theory' approach of Glaser and Strauss among

    others. Finally, a fourth class of strategies is termed participant-oriented models. As the

    term suggests, they emphasize the central importance of the evaluation participants,

    especially clients and users of the program or technology. Client-centered and

    stakeholder approaches are examples of participant-oriented models, as are consumer-

oriented evaluation systems.

Types of Evaluation

Process (also called "methods")


    Process evaluation examines the procedures and tasks involved in implementing a

    program. This type of evaluation also can look at the administrative and organizational

    aspects of the program. Process evaluation monitors the program to ensure feedback

during the course of the program.

Impact (also called "outcome objectives")

Impact evaluation is the most comprehensive of the four evaluation types. It is desirable

    because it focuses on the long-range results of the program and changes or

    improvements in health status as a result. However, impact evaluations are rarely

    possible because they are frequently costly and involve extended commitment. Also,

    the results often cannot be directly related to the effects of an activity or program

    because of other (external) influences on the target audience, which occur over time.

Information obtained from an impact study may include: changes in morbidity and mortality; changes in absenteeism from work; long-term maintenance of desired behavior; rate of recidivism.

Outcome (also called "bridging objectives")

Outcome evaluation is used to obtain descriptive data on a project and to document short-term

results. Task-focused results are those that describe the output of the activity (e.g., the number of public inquiries received as a result of a public service announcement).

    Short-term results describe the immediate effects of the project on the target audience

    (e.g., percent of the target audience showing increased awareness of the subject).

Information that can result from an outcome evaluation includes: knowledge and attitude changes; expressed intentions of the target audience; short-term or intermediate behavior shifts; policies initiated or other institutional changes made.

Formative

Formative evaluation, including pre-testing, is designed to assess the strengths and weaknesses of materials or campaign strategies before implementation. Formative

    research tailors the program to the target audience. Messages or products are tested by

a small group before they are implemented on a large scale. This type of evaluation permits necessary revisions before the full effort goes forward. Its basic purpose is to maximize the chance for program success before the activity starts.

Summative

Any combination of measurements and judgments that permit conclusions to be drawn about the impact, outcome, or benefits of a program or method.

Three Levels of Evaluation

Project-Level Evaluation - Project-level evaluation is the evaluation that project directors are responsible for locally. The project director, with appropriate staff and with input from

    board members and other relevant stakeholders, determines the critical evaluation

    questions, decides whether to use an internal evaluator or hire an external consultant,

and conducts and guides the project-level evaluation. The Foundation provides assistance as needed. The primary goal of project-level evaluation is to improve and strengthen Kellogg-funded projects through the consistent, ongoing collection and analysis of information for use in decision making.

Consistent Collection of Information - If the

answers to your questions are to be reliable and believable to your project's

    stakeholders, the evaluation must collect information in a consistent and thoughtful

way. This collection of information can involve individual interviews, written surveys,


    focus groups, observation, or numerical information such as the number of

participants. While the methods used to collect information can and should vary from

    project to project, the consistent collection of information means having thought through

    what information you need, and having developed a system for collecting and analyzing

    this information. The key to collecting data is to collect it from multiple sources and

perspectives, and to use a variety of methods for collecting information. The best

    evaluations engage an evaluation team to analyze, interpret, and build consensus on

    the meaning of the data, and to reduce the likelihood of wrong or invalid interpretations.

Use in Decision Making - Since there is no single best approach to evaluation which

    can be used in all situations, it is important to decide the purpose of the evaluation, the

    questions you want to answer, and which methods will give you usable information that

    you can trust. Even if you decide to hire an external consultant to assist with the

    evaluation, you, your staff, and relevant stakeholders should play an active role in

addressing these questions. You know the project best, and ultimately you know what

you need. In addition, because you are one of the primary users of evaluation information, and because the quality of your decisions depends on good information, it

    is better to have negative information you can trust than positive information in which

    you have little faith. Again, the purpose of project-level evaluation is not just to prove,

    but also to improve. People who manage innovative projects have enough to do without

    trying to collect information that cannot be used by someone with a stake in the project.

    By determining who will use the information you collect, what information they are likely

    to want, and how they are going to use it, you can decide what questions need to be

    answered through your evaluation. Project-level evaluation should not be a stand-alone

    activity, nor should it occur only at the end of a program. Project staff should think about

how evaluation can become an integrated part of the project, providing important information about program management and service delivery decisions. Evaluation

should be ongoing and occur at every phase of a project's development, from

    preplanning to start-up to implementation and even to expansion or replication phases.

    For each of these phases, the most relevant questions to ask and the evaluation

activities may differ. What remains the same, however, is that evaluation helps project staff and community partners make effective decisions to continuously strengthen and

    improve the initiative.

RAINIER'S PART

EVALUATION TOOLS

Evaluation Matrix - Although by all appearances the "Evaluation Matrix" is a very simple tool, it has a powerful purpose. It helps you to consider a wider range of data collection methods than you might otherwise consider in relation to each of the questions addressed by your evaluation. Evaluators sometimes get into the habit of using one or another data collection method, e.g., an end-of-training questionnaire, without considering the advantages of alternative methods. This tool prompts you to consider each evaluation question and to decide which of the many data collection options have the greatest potential for providing the desired information.
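As a toy illustration of the grid (the questions, methods and marks below are all hypothetical), the matrix crosses each evaluation question with the candidate data collection methods worth considering for it:

    # Rows: evaluation questions. Columns: candidate data collection methods.
    # All questions, methods and marks are hypothetical.
    questions = [
        "Did trainees' skills improve?",
        "Was the program implemented as planned?",
    ]
    methods = ["questionnaire", "interview", "observation", "records"]
    matrix = {
        questions[0]: {"questionnaire", "records"},
        questions[1]: {"interview", "observation"},
    }

    print(" " * 42 + "  ".join(m.ljust(13) for m in methods))
    for q in questions:
        marks = "  ".join(("X" if m in matrix[q] else "-").ljust(13) for m in methods)
        print(q.ljust(42) + marks)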

    reported as "cold hard statistics." Often you will want to tell the "human story" involved

    in your development or implementation project. One way of capturing those important

    stories and critical incidents that provide the human story is the "Anecdotal Record

    Form." Participants in an interactive multimedia design project can use this instrument

    to describe a noteworthy event and to offer their own interpretation of its relevance. It is

    very important to try to complete an Anecdotal Record Form as soon as possible after a

    critical event has occurred so as not to forget critical information. It is equally important

    to separate your description of the incident from your interpretation of it! Expert Review

Expert Review Checklist - Expert review is one of the primary evaluation strategies used in both formative (How can this multimedia program be improved?) and summative (What is the effectiveness and worth of this multimedia program?) evaluation. It is often a good idea to provide experts with some sort of instrument or guide to ensure that they critique all of the important aspects of the IMM program that you want reviewed. This "Expert Review Checklist" has been designed for use by an instructional design expert. You would employ different sorts of Expert Review Checklists with different types of experts, such as a content expert or a human-computer interface expert.

Focus Group Protocol - Focus groups are a powerful means of collecting data about learner or instructor reactions to a new interactive multimedia program. However, focus groups need to be carefully planned so that you get the kind and quality of information you are seeking. This "Focus Group Protocol" is a brief example of a list of questions that might be addressed during a focus group regarding an interactive multimedia program.

Formative Review Log - The "Formative Review Log" is a simple instrument that can be used by anyone you have asked to review your program in its formative stages. The instrument has three columns: the first for recording the screen or format sheet number that the person is reviewing, the second for writing down observations (e.g., errors, confusing points, or ideas), and the third for recording what actions have been taken in reaction to the feedback provided by members of the project team. Using an instrument like this with many different types of users will probably have the greatest pay-off for formative evaluation throughout the life of the project.
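A minimal sketch of that three-column log as a list of records; the screen numbers, observations and actions are hypothetical.

    # Each row: (screen/format sheet number, observation, action taken).
    formative_log = [
        ("S-03", "Button label 'Nxt' is a typo", "Corrected to 'Next'"),
        ("S-07", "Instructions unclear on first read", "Rewrote intro sentence"),
    ]
    print(f"{'Screen':8}{'Observation':38}{'Action taken'}")
    for screen, observation, action in formative_log:
        print(f"{screen:8}{observation:38}{action}")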

Implementation Log - It is one thing to plan and develop a good interactive multimedia program. It is entirely another thing to implement it as planned. Many training innovations have failed because implementation factors (such as instructor motivation) were not considered. It is essential to make every effort to collect information regarding the actual use of an interactive multimedia program as compared to the planned use. The "Implementation Log" tool has been designed to make that comparison a little more systematic.

Interview Protocol - Interviews are a powerful means of collecting data about learner or instructor reactions to a new interactive multimedia program. However, interviews need to be


    carefully planned so that you get the kind and quality of information you are seeking.

    This "Interview Protocol" is a brief example of a list of questions that might be

    addressed during an interview regarding an interactive multimedia program.

Questionnaire - Questionnaires are undoubtedly the single most frequently used type of

    evaluation instrument. Poorly designed questionnaires are often administered at the

    close of a course or training session as a "smilometer" or "happiness indicator." They

    are also often distributed to users of interactive multimedia programs. If the only thing

    you find out about your interactive multimedia program with a questionnaire is whether

    the trainees liked it, you are not making good use of this strategy. As shown in the

    "Questionnaire," a wealth of information can be provided by a well-designed instrument.

User Interface Rating Form - The "User Interface" of an interactive instructional product, e.g., a multimedia program, is a critical element of the product that must be

    carefully evaluated. If the user interface is not well-designed, learners will have little

    opportunity to learn from the program. This rating form includes ten major criteria for

assessing the user interface for an interactive program, such as "ease of use" and "screen design." Not all of the criteria may be relevant to the particular program you are

    evaluating, but most of them will. You may need to add additional criteria to the list.

    Novice users of interactive instructional products are generally not good candidates for

    using this form. The people rating the user interface should be experienced users of the

    type of program you are asking them to rate. Even better, they could be experienced

designers of interactive programs.
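A rating form like this can be represented as criteria scored on a simple scale. "Ease of use" and "screen design" come from the text; the other criteria, the scale and all scores are hypothetical.

    # Ratings on a 1 (poor) to 5 (excellent) scale; criteria beyond the two
    # named in the text, and all scores, are hypothetical.
    ratings = {
        "ease of use": 4,
        "screen design": 3,
        "navigation": 5,
        "feedback to learner": 2,
    }
    for criterion, score in sorted(ratings.items(), key=lambda kv: kv[1]):
        print(f"{criterion:22}{score}")
    print(f"{'average':22}{sum(ratings.values()) / len(ratings):.1f}")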

    Sample" presents one way of structuring an evaluation report. Evaluation reports are

    notorious for being weighty volumes that few people read. Not surprisingly, lengthy

    reports have little effect on decision-makers. This tool illustrates a strategy for dividing

an evaluation report into two-page sections that each include four parts: 1) an attention-getting headline, 2) a description of the major issues related to the headline, 3)

    a presentation of data related to the issues, and 4) a bottom-line recommendation or

    summary of the findings. People who receive a report in this format can take two or

    three sections at a time and make them agenda items for their team meetings. In this

    way, the evaluation findings are much more likely to have an impact on practical

    decisions.
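A minimal sketch of one such two-page section as data; the headline and all contents are invented for illustration.

    # One report section: headline, issues, data, bottom-line recommendation.
    # All values are hypothetical.
    section = {
        "headline": "Trainees finish faster but skip the practice module",
        "issues": "Completion speed vs. depth of practice; module sequencing.",
        "data": "Median completion 42 min (was 60); 58% skipped practice.",
        "recommendation": "Make the practice module a required step before the quiz.",
    }
    for part in ("headline", "issues", "data", "recommendation"):
        print(f"{part.upper()}: {section[part]}")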