Chapter 8 (Part 2)
8.3.2 CUSTOM-MADE CODING SYSTEM

8.3.2.1 QUESTION FORMATION
The researchers needed a coding scheme that would allow them to identify how the learners' question formation changed over time.
To code the data, Mackey & Philp assigned the questions produced by their child learners to one of six stages based on the Pienemann-Johnston hierarchy. The modified version is shown in Table 8.6.
After the individual questions had been coded for stage, the next step involved assigning an overall stage to each learner, based on the two highest-level question forms asked in two different tasks. Once each learner's highest-level stage had been determined, it was possible to examine whether the learners had improved over time.
Table 8.7 Coding for Question Stage

| ID | Pretest Task 1 | Pretest Task 2 | Pretest Task 3 | Pretest Final Stage | Immediate Posttest Task 1 | Immediate Posttest Task 2 | Immediate Posttest Task 3 | Immediate Posttest Final Stage | Delayed Posttest Task 1 | Delayed Posttest Task 2 | Delayed Posttest Task 3 | Delayed Posttest Final Stage |
|----|---|---|---|---|---|---|---|---|---|---|---|---|
| AB | 3 | 3 | 2 | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 2 | 3 |
| AA | 3 | 3 | 3 | 3 | 5 | 5 | 4 | 5 | 5 | 5 | 4 | 5 |
| AC | 3 | 4 | 3 | 3 | 2 | 2 | 3 | 2 | 3 | 3 | 3 | 3 |
| AD | 3 | 3 | 4 | 4 | 3 | 5 | 5 | 5 | 5 | 3 | 3 | 3 |
Learner AB remained at Stage 3 throughout the study.
Learner AA began the study at Stage 3 and was at Stage 5 on the subsequent posttests.
Once this sort of coding has been carried out, the researcher can make decisions about the analysis.
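As a rough illustration of this kind of stage assignment, the sketch below is a hypothetical Python example, not Mackey & Philp's actual procedure; the function name and the rule of requiring a stage to appear on at least two tasks are my assumptions about how task-level codings like those in Table 8.7 could be summarized.

```python
# Hypothetical sketch: assign an overall developmental stage as the highest
# stage a learner produced on at least two different tasks in a session.
# (This rule is an assumption for illustration, not the published criterion.)

def overall_stage(task_stages):
    """task_stages: one stage number per task, e.g. [3, 3, 2] for learner AB's pretest."""
    for stage in sorted(set(task_stages), reverse=True):
        if task_stages.count(stage) >= 2:
            return stage
    # If no stage was reached on two tasks, fall back to the single highest stage.
    return max(task_stages)

print(overall_stage([3, 3, 2]))  # -> 3 (cf. learner AB's pretest in Table 8.7)
print(overall_stage([5, 5, 4]))  # -> 5 (cf. learner AA's immediate posttest)
```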
8.3.2.2 NEGATIVE FEEDBACK
Oliver developed a hierarchical coding system that first divided all teacher-student and NS-NNS (native speaker - nonnative speaker) conversations into three parts:
(1) the NNS's initial turn
(2) the response given by the teacher or NS partner
(3) the NNS's reaction
Each part was then subjected to further coding.
Figure 8.1 Three-turn coding scheme
Initial Turn (rated as): Correct | Non-target | Incomplete
NS Response: Ignore | Negative Feedback | Continue
NNS Response: Response | Ignore | No Chance
As with many schemes, this one is top-down (hierarchical), and the categories are mutually exclusive, meaning that each piece of data can be coded in only one way.
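One way to see what "mutually exclusive" means in practice is the sketch below, a hypothetical Python representation of the three-turn scheme in Figure 8.1 (the enum and variable names are my own illustration, not Oliver's instrument): each turn slot holds exactly one code.

```python
# A minimal sketch of the three-turn scheme in Figure 8.1, represented so
# that every turn receives exactly one mutually exclusive code.
from enum import Enum

class InitialTurn(Enum):
    CORRECT = "correct"
    NON_TARGET = "non-target"
    INCOMPLETE = "incomplete"

class NSResponse(Enum):
    IGNORE = "ignore"
    NEGATIVE_FEEDBACK = "negative feedback"
    CONTINUE = "continue"

class NNSReaction(Enum):
    RESPONSE = "response"
    IGNORE = "ignore"
    NO_CHANCE = "no chance"

# One coded exchange: because each slot holds exactly one Enum member,
# mutual exclusivity of the categories is enforced by construction.
exchange = (InitialTurn.NON_TARGET, NSResponse.NEGATIVE_FEEDBACK, NNSReaction.RESPONSE)
print([code.value for code in exchange])
```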
8.3.2.3 CLASSROOM INTERACTION
The next turn was examined to determine (1) whether the error was corrected or (2) whether it was ignored. If the error was corrected, the following turn was examined and coded according to (1) whether the learner produced uptake or (2) whether the topic was continued. Finally, the talk following uptake was examined with regard to (1) whether the uptake was reinforced or (2) whether the topic was continued.
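The sketch below is a hypothetical Python illustration of this conditional, step-by-step coding (the function and category names are my assumptions, not the original scheme): each later turn is coded only when the earlier turn makes it relevant.

```python
# Hypothetical sketch of a conditional, hierarchical coding pass over one
# error treatment sequence: later turns are coded only if applicable.

def code_error_treatment(error_corrected, uptake=None, uptake_reinforced=None):
    sequence = {"next_turn": "corrected" if error_corrected else "ignored"}
    if error_corrected:
        # Following turn: did the learner produce uptake, or was the topic continued?
        sequence["following_turn"] = "uptake" if uptake else "topic continued"
        if uptake:
            # Talk after uptake: was the uptake reinforced, or did the topic continue?
            sequence["after_uptake"] = "reinforced" if uptake_reinforced else "topic continued"
    return sequence

print(code_error_treatment(error_corrected=True, uptake=True, uptake_reinforced=False))
# -> {'next_turn': 'corrected', 'following_turn': 'uptake', 'after_uptake': 'topic continued'}
```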
8.3.2.4 SECOND LANGUAGE WRITING INSTRUCTION
Two studies used coding categories:
(1) Adams (2003) investigated the effects of written error correction on learners' subsequent second language writing.
(2) Sachs & Polio (2004) compared three feedback conditions.
The researchers used different coding schemes suited to their research questions and designed to allow the feedback conditions to be compared with one another.
Sachs & Polio coded each T-unit with respect to the original error(s) as: completely corrected (+), partially changed (0), completely unchanged (-), or not applicable (n/a). They considered codings of at least "partially changed" to be possible evidence of noticing, even when the revised forms were not completely more targetlike.
Adams coded individual forms as: (1) more targetlike, (2) not more targetlike, or (3) not attempted (avoided).
8.3.2.5 TASK PLANNING
Yuan and Ellis (2003) examined the effects of planning on task performance in terms of fluency, complexity, and accuracy, operationalized as follows (sketched in code below):
(1) Fluency: (a) the number of syllables per minute, and (b) the number of meaningful syllables per minute, where repeated or reformulated syllables were not counted.
(2) Complexity: syntactic complexity (the ratio of clauses to T-units), syntactic variety (the total number of different grammatical verb forms used), and mean segmental type-token ratio.
(3) Accuracy: the percentage of error-free clauses, and correct verb forms (the percentage of accurately used verb forms).
A benefit of such a coding system is that it is similar enough to those used in previous studies for results to be comparable, while also being finely grained enough to capture new information.
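The sketch below is a minimal Python illustration of the operationalizations listed above for a single speech sample; the function names and the counts in the example are my assumptions, and it presumes the syllables, clauses, T-units, and verb forms have already been counted.

```python
# Minimal sketch of Yuan and Ellis's (2003) fluency, complexity, and accuracy
# measures, assuming the relevant quantities have already been counted.

def fluency(syllables, meaningful_syllables, minutes):
    # (a) syllables per minute; (b) meaningful syllables per minute,
    # where repeated or reformulated syllables are excluded from (b).
    return syllables / minutes, meaningful_syllables / minutes

def syntactic_complexity(clauses, t_units):
    # Ratio of clauses to T-units.
    return clauses / t_units

def accuracy(error_free_clauses, clauses, correct_verb_forms, verb_forms):
    # Percentage of error-free clauses and of accurately used verb forms.
    return (100 * error_free_clauses / clauses,
            100 * correct_verb_forms / verb_forms)

# Example with made-up counts for one learner's task performance.
print(fluency(syllables=420, meaningful_syllables=390, minutes=3))   # (140.0, 130.0)
print(syntactic_complexity(clauses=45, t_units=30))                  # 1.5
print(accuracy(error_free_clauses=27, clauses=45,
               correct_verb_forms=50, verb_forms=60))                # (60.0, 83.33...)
```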
8.3.3 CODING QUALITATIVE DATA (1)
Schemes for qualitative coding generally emerge from the data (open coding).
The range of variation within individual categories can assist in the process of adapting and finalizing the coding system, with the goal of closely reflecting and representing the data.
The researcher examines the data for emergent patterns and themes, looking for anything pertinent to the research question or problem.
New insights and observations that are not derived from the research question or the literature review may also be important.
8.3.3 CODING QUALITATIVE DATA (2)
Themes and topics should emerge from the first round of insights into the data, as the researcher begins to consider which chunks of data fit together and which, if any, form independent categories.
Problem: with highly specific coding schemes, it can be difficult to compare qualitative coding and results across studies and contexts.
Watson-Gegeo (1988): "Although it may not be possible to compare coding between settings on a surface level, it may still be possible to do so on an abstract level."
8.4 INTERRATER RELIABILITY (1)
Interrater reliability is the reliability of a test or measurement, based on the degree of similarity between the results obtained by different researchers using the same instrument and method. If interrater reliability is high, the results will be very similar.
If only one coder is used and no intracoder reliability measures are reported, the reader's confidence in the conclusions of the study may be undermined.
To increase confidence:
(1) Have more than one rater code the data wherever possible.
(2) Carefully select and train the raters.
(3) Keep coders selectively blind about which part of the data, or which group, they are coding, in order to reduce the possibility of inadvertent coder bias.
8.4 INTERRATER RELIABILITY (2)
To increase rater reliability, schedule coding in rounds or trials to reduce boredom and drift.
How much data should be coded? As much as is feasible given the time and resources available for the study.
Consider the nature of the coding scheme when determining how much of the data should be coded by a second rater.
With highly objective, low-inference coding schemes, it is possible to establish confidence in rater reliability with as little as 10% of the data.
8.4.1.1 SIMPLE PERCENTAGE AGREEMENT
This is the ratio of all coding agreements to the total number of coding decisions made by the coders (appropriate for continuous data).
The drawback is that it ignores the possibility that some of the agreement may have occurred by chance.
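As a minimal sketch, simple percentage agreement can be computed as below; this is a hypothetical Python example assuming two coders' decisions are stored as parallel lists (the variable names and data are my own).

```python
# Minimal sketch of simple percentage agreement between two coders.

def percentage_agreement(coder_a, coder_b):
    if len(coder_a) != len(coder_b):
        raise ValueError("Coders must have rated the same items.")
    agreements = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100 * agreements / len(coder_a)

# Example: two coders' decisions on ten items (made-up data).
coder_a = ["target", "nontarget", "target", "target", "nontarget",
           "target", "target", "nontarget", "target", "target"]
coder_b = ["target", "nontarget", "target", "nontarget", "nontarget",
           "target", "target", "target", "target", "target"]
print(percentage_agreement(coder_a, coder_b))  # -> 80.0
```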
8.4.1.2. COHEN’S KAPPA
This statistic represents the average rate of agreement for an entire set of scores, accounting for the frequency of both agreements and disagreements by category.
In a dichotomous coding scheme (e.g., targetlike vs. nontargetlike), this means taking into account each combination of the two coders' decisions: cases where the first coder chose targetlike and the second chose nontargetlike (and vice versa), as well as cases where both coders chose the same category.
Kappa also accounts for agreement that may have occurred by chance.
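A minimal sketch of the calculation is shown below (a hypothetical Python example, assuming the same parallel-list format as above): kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the agreement expected by chance from each coder's category frequencies.

```python
# Minimal sketch of Cohen's kappa for two coders and a dichotomous scheme.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n  # p_o
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    categories = set(coder_a) | set(coder_b)
    # p_e: chance agreement from each coder's marginal category proportions.
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

coder_a = ["target", "nontarget", "target", "target", "nontarget",
           "target", "target", "nontarget", "target", "target"]
coder_b = ["target", "nontarget", "target", "nontarget", "nontarget",
           "target", "target", "target", "target", "target"]
print(round(cohens_kappa(coder_a, coder_b), 2))  # -> roughly 0.52 for these data
```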
8.4.1.3. ADDITIONAL MEASURES OF RELIABILITY
Pearson's product-moment and Spearman rank correlation coefficients are based on measures of correlation and reflect the degree of association between the ratings provided by two raters.
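As a minimal sketch, both coefficients are available in SciPy; the rating values below are made up, and the example assumes the two raters assigned numerical scores to the same ten items.

```python
# Minimal sketch of correlation-based reliability between two raters' scores.
from scipy.stats import pearsonr, spearmanr

rater_1 = [3, 4, 2, 5, 4, 3, 5, 2, 4, 3]
rater_2 = [3, 5, 2, 4, 4, 3, 5, 3, 4, 2]

r, _ = pearsonr(rater_1, rater_2)      # degree of linear association
rho, _ = spearmanr(rater_1, rater_2)   # association between rank orders
print(round(r, 2), round(rho, 2))
```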
8.4.1.4 GOOD PRACTICE GUIDELINES FOR INTERRATER RELIABILITY
"There is no well-developed framework for choosing appropriate reliability measures" (Rust & Cooil, 1994).
General good practice guidelines suggest that researchers should state:
(1) which measure was used to calculate interrater reliability,
(2) what the score was, and
(3) briefly, why that particular measure was chosen.
8.4.1.5 HOW DATA ARE SELECTED FOR INTERRATER RELIABILITY TESTS
Semi-randomly select a portion of the data (say, 25%) to be coded by a second rater.
To create a comprehensive sample, select the 25% randomly from different parts of the main dataset. For example, if a pretest and three posttests are used, data from each of them should be included in the 25% (see the sketch below).
Intrarater reliability refers to whether a rater will assign the same score to the same data after a set period of time.
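The sketch below is a hypothetical Python illustration of drawing such a semi-random 25% sample so that each test phase contributes items for the second rater; the phase names, item IDs, and function name are my assumptions.

```python
# Hypothetical sketch: stratified semi-random selection of a 25% reliability
# sample, drawing from every phase of the dataset.
import random

def reliability_sample(dataset, proportion=0.25, seed=42):
    """dataset: dict mapping phase name -> list of coded items."""
    rng = random.Random(seed)
    sample = {}
    for phase, items in dataset.items():
        k = max(1, round(len(items) * proportion))
        sample[phase] = rng.sample(items, k)
    return sample

# Made-up item IDs per phase.
data = {
    "pretest": list(range(1, 41)),
    "posttest1": list(range(41, 81)),
    "posttest2": list(range(81, 121)),
    "posttest3": list(range(121, 161)),
}
print({phase: len(items) for phase, items in reliability_sample(data).items()})
# -> 10 items from each phase (25% of 40)
```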
8.4.1.6 WHEN TO CARRY OUT CODING RELIABILITY CHECKS
Researchers can use a sample dataset to train themselves and their other coders, and to test out their coding scheme early in the coding process.
Reporting on coding should include:
(1) what measure was used,
(2) the amount of data coded,
(3) the number of raters employed,
(4) the rationale for choosing the measure used,
(5) the interrater reliability statistics, and
(6) what happened to data about which there was disagreement.
Complete reporting will help the researcher provide a solid foundation for the claims made in the study, and will also facilitate the process of replicating studies.
8.5. THE MECHANICS OF CODING
(1) Using highlighting pens and working directly on transcripts.
(2) Listening to tapes or watching videotapes without transcribing everything: researchers may simply mark coding sheets when the phenomena they are interested in occur.
(3) Using computer programs (CALL programs).
8.5.1. HOW MUCH TO CODE
(1) Researchers should consider and justify why they are not coding all of their data.
(2) Determine how much of the data to code (data sampling or data segmentation).
(3) The data selected must be representative of the dataset as a whole and should also be appropriate for any comparisons being made.
(4) The research questions should ultimately drive the decisions made, and researchers should specify principled reasons for selecting the data to code.
8.5.2 WHEN TO MAKE CODING DECISIONS
Decisions about how to code and how much to code should be made prior to the data collection process, by carrying out an adequate pilot study. This allows for piloting not only of materials and methods, but also of coding and analysis.
The most effective way to avoid potential problems is to design coding sheets ahead of data collection and then test them out in a pilot study.
8.6. CONCLUSION
Many of the processes involved in data coding can be thought through ahead of time and then pilot tested.
These include the preparation of raw data for coding, transcription, the modification or creation of appropriate coding systems, and the plan for determining reliability.