School Psychology Review, 2008, Volume 37, No. 1, pp. 18-37

SPECIAL TOPIC

Reading Fluency as a Predictor of Reading Proficiency in Low-Performing, High-Poverty Schools

Scott K. Baker
Pacific Institutes for Research

Keith Smolkowski
Abacus Consulting and Oregon Research Institute

Rachell Katz and Hank Fien
University of Oregon

John R. Seeley
Abacus Consulting and Oregon Research Institute

Edward J. Kame'enui
National Center for Special Education Research, U.S. Department of Education

Carrie Thomas Beck
University of Oregon

Abstract. The purpose of this study was to examine oral reading fluency (ORF) in the context of a large-scale federal reading initiative conducted in low-performing, high-poverty schools. The objectives were to (a) investigate the relation between ORF and comprehensive reading tests, (b) examine whether slope of performance over time on ORF predicted performance on comprehensive reading tests over and above initial level of performance, and (c) test how well various models of ORF and performance on high-stakes reading tests in Year 1 predicted performance on high-stakes reading tests in Year 2. Subjects were four cohorts of students in Grades 1-3, with each cohort representing approximately 2,400 students. Results support the use of ORF in the early grades to screen students for reading problems and monitor reading growth over time. The use of ORF in reading reform and implications for school psychologists are discussed.

This work was supported by an Oregon Reading First subcontract from the Oregon Department of Education to the University of Oregon (8948). The original Oregon Reading First award was granted by the U.S. Department of Education to the Oregon Department of Education (S357A0020038).

Correspondence regarding this article should be addressed to Scott K. Baker, University of Oregon, Pacific Institutes for Research, 1600 Millrace Drive, Suite 109, Eugene, OR 97403; e-mail: sbaker@uoregon.edu

Copyright 2008 by the National Association of School Psychologists. ISSN 0279-6015



Over 90% of the approximately 1,600 districts and 5,283 schools in the United States that have implemented Reading First (www.readingfirstsupport.us) use oral reading fluency (ORF) to screen students for reading problems and monitor reading progress over time (Greenberg, S., Howe, K., Levi, S., & Roberts, G., personal communications, 2006). Other major education reforms, such as response to intervention (Individuals With Disabilities Education Improvement Act, 2004), have also significantly increased the use of ORF to assess reading performance. Common across these reforms is a focus on intervening early and intensively to address reading problems. An extensive research base in special education and general education provides strong support for the use of ORF as a measure of reading proficiency, but few studies have investigated the use of this measure in a nationwide federal reading initiative such as Reading First. This study addresses the use of ORF as an index of reading proficiency and as a measure of student progress over time in the context of Reading First in Oregon (http://www.pde.state.or.us/search/results/?id=96).

The roots of ORF lie in curriculum-based measurement (CBM), a set of procedures for measuring academic proficiency in basic skill areas including reading, spelling, written expression, and mathematics (Deno, 1985; Deno & Mirkin, 1977; Fuchs & Fuchs, 2002; Shinn, 1989, 1998). ORF is the most thoroughly studied of all CBM measures and has generated the most empirical support for its use. On ORF, students typically read a story or passage from grade-level reading material, and the number of words read correctly in 1 min constitutes the student's performance score.
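A minimal sketch of this scoring rule, assuming the examiner has already counted the words attempted and the errors marked during the 1-min timing (the function name and sample numbers are hypothetical):

```python
# Words-correct-per-minute (WCPM) scoring for one ORF passage.
# Assumes errors were marked by the examiner during a 1-min timing.
def wcpm(words_attempted: int, errors: int) -> int:
    """Return the number of words read correctly in 1 minute."""
    return max(words_attempted - errors, 0)

# A student who reaches the 53rd word with 4 errors scores 49 WCPM.
print(wcpm(words_attempted=53, errors=4))  # 49
```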

There is strong theoretical support for reading fluency as an important component of reading competence. LaBerge and Samuels (1974) hypothesized that automaticity of reading was directly connected to reading comprehension. Based on this model of reading development, it is hypothesized that effortless word-level reading frees up attention resources that can be devoted specifically to comprehension (Adams, 1990; National Reading Panel, 2000). Posner and Snyder (1975) suggested two context-based expectancy processes that facilitate word recognition. The first consists of "automatic fast-spreading semantic activation" (Jenkins, Fuchs, van den Broek, Espin, & Deno, 2003, p. 720) that does not require conscious attention. The second "involves slow-acting, attention-demanding, conscious use of surrounding context for word identification" (Jenkins et al., 2003, p. 720). Stanovich (1980) proposed that reading fluency results from bottom-up (print-driven) and top-down (meaning-driven) processes that operate concurrently when a reader confronts a word in context. Skilled readers rarely rely on conscious bottom-up processes to read words because word recognition is virtually automatic. Poor readers rely more on the context of the sentence to read words accurately because their bottom-up processes are inefficient and unreliable (Stanovich, 2000). Although there are important differences between these models, they all assert that efficient word recognition processes free up resources for comprehension. In addition, many studies have empirically demonstrated the association between ORF and overall reading proficiency, including comprehension.

ORF as an Index of Reading Proficiency

Deno, Mirkin, and Chiang (1982) published the first validity study on ORF. Five CBM measures were administered to students in special and general education (Grades 1-5). Students read words in a word list, read underlined words in passages, read words in intact passages (i.e., ORF), identified missing words in passages (i.e., cloze), and stated the meaning of underlined words in passages. ORF was the strongest measure, correlating with published criterion measures between .71 and .91. ORF correlated higher with published measures of reading comprehension than did cloze or word meaning, which were considered more direct measures of overall reading.

In a second important validity study, Fuchs, Fuchs, and Maxwell (1988) investigated CBM reading measures with middle school students in special education. Again, a range of CBM measures was investigated, including answering questions, recall, and cloze tests. ORF was the strongest measure on a number of grounds. It correlated higher with two Stanford Achievement Test subtests than the other CBM measures did. It correlated higher with each of the CBM measures than any of the others. Perhaps most important, ORF correlated higher with the reading comprehension criterion measure (.92) than it did with the decoding criterion measure (.81). In other words, ORF was more strongly related to comprehension than decoding, a pattern that has been replicated in other studies (Shinn, Good, Knutson, Tilly, & Collins, 1992).

Numerous additional early studies were published establishing the validity of ORF as a measure of overall reading proficiency. One of the major conclusions of this research is that correlations between ORF and published measures of reading proficiency, including reading comprehension, are consistently moderate to strong in value, generally ranging from .60 to .90 (see Marston, 1989, and Shinn, 1998, for reviews of the research on ORF).

In the context of No Child Left Behind (2002), in which annual reading assessments are required beginning in Grade 3, a number of studies have examined the relation between ORF and performance on state reading assessments. These correlational studies have confirmed the moderate to strong association between ORF and overall measures of reading proficiency. For example, Grade 3 correlations between ORF and the reading test of the Colorado Student Assessment Program ranged from .73 to .80 (Shaw & Shaw, 2002). At Grades 4 and 5, correlations between the ORF and Colorado Student Assessment Program were .67 and .75 (Wood, 2006). McGlinchey and Hixson (2004) studied the relation between ORF and student performance on the reading test of the Michigan Educational Assessment Program from 1995 to 2002. Correlations by year ranged from .49 to .83, with an overall correlation calculated across years of .67. When ORF and the state reading tests in North Carolina and Arizona were administered in the spring of Grade 3, the correlations were .73 (Barger, 2003) and .74 (Wilson, 2005). The correlation between ORF administered in Grades 3 and 4 and the reading portion of the Ohio Proficiency Test ranged from .61 to .65 (Vander Meer, Lentz, & Stollar, 2005).

Researchers at the University of Michigan (Schilling, Carlisle, Scott, & Zeng, 2007) studied the predictive and concurrent validity of ORF with the Iowa Test of Basic Skills in Grades 1-3 in Michigan Reading First schools. ORF correlations with the Iowa Test of Basic Skills total reading score ranged from .65 to .75, and with the Iowa Test of Basic Skills reading comprehension subtest ranged from .63 to .75. Finally, Stage and Jacobsen (2001) reported correlations of .50, .51, and .51 between fall, winter, and spring administrations of ORF and the Washington Assessment of Student Learning (WASL) in Grade 4. The authors conjectured that the use of short written answers and extended-response items on the WASL, which are not strictly reading measures, may have led to lower correlations than usually reported involving ORF.

The consistent link between ORF and criterion measures of reading performance has been established primarily with students in Grades 3 and higher. Consequently, these studies are quite relevant in the context of No Child Left Behind (2002), in which annual assessments are required beginning in Grade 3. In Reading First, however, reading outcome assessments are also used in Grades 1 and 2 and frequently in kindergarten. Thus, it is important to understand the link between ORF and comprehensive measures of reading before Grade 3.

A specific focus of the current study is the link between ORF and specific high-stakes statewide reading tests in Grades 1-3. In this study, we refer to comprehensive measures of reading administered in Grades 1-3 as high-stakes assessments, even though for No Child Left Behind purposes high-stakes testing begins in Grade 3. However, in Reading First in Oregon and other states, comprehensive reading assessments administered at the end of Grades 1 and 2 are also used to make decisions about continued support in Reading First and other "high-stakes" decisions. When we refer to high-stakes reading tests, we are referring to the specific tests investigated in this study and not all high-stakes reading tests.

Oral Reading Fluency as an Index of Reading Growth Over Time

Most research has focused on ORF as a measure of reading performance at a single point in time. Few studies have examined ORF as a direct index of reading growth over time. In the study by Deno et al. (1982), ORF performance increased as students moved up in grades, providing cross-sectional evidence of growth over time. Hasbrouck and Tindal (1992, 2006) presented normative performance information on ORF in the fall, winter, and spring in Grades 2-5 and found that as time within year and grade level increased, student performance increased. The cross-sectional data showed that students grew fastest in Grades 2 and 3. Not surprisingly, growth rates are also related to student reading difficulty. Deno, Fuchs, Marston, and Shin (2001) found that first-grade students in general education demonstrated more than twice the growth in ORF of their peers in special education.

To investigate typical growth rates for children over time, Fuchs, Fuchs, Hamlett, Walz, and Germann (1993) conducted the first longitudinal study on ORF. Different students were assessed in Grades 1-6, but in each grade the same students were tested repeatedly over time. The number of students in each grade ranged from 14 to 25. Results showed that slope of performance decreased consistently across grades. Average increases per week were 2.10, 1.46, 1.08, 0.84, 0.49, and 0.32 across Grades 1-6, respectively. These results are consistent with the cross-sectional findings reported by Deno et al. (1982) and Hasbrouck and Tindal (1992, 2006).

Higher rates of growth in Grades 1 and 2 provide support for early reading interventions, assuming that increased ORF growth is associated with real reading growth, as measured by a comprehensive measure of reading. Speece and Ritchey (2005) provided partial support for the importance of growth on ORF by demonstrating that students who had healthy rates of growth in Grade 1 were more likely to maintain these growth rates in Grade 2 and also were more likely to end Grade 2 at grade level than students who had low rates of growth. In line with Deno et al. (2001), Speece and Ritchey (2005) also found that risk factors predicted growth on ORF in first grade. Using growth curve analysis, students at risk for reading problems at the beginning of first grade had predicted ORF scores at the end of the year that were less than half the magnitude of their peers not at risk (M = 20 vs. 56.9). Performance at the end of the year was based on 20 weekly ORF assessments administered from January to May. Speece and Ritchey (2005), however, did not investigate whether ORF slope was associated with performance on a strong criterion measure of overall reading proficiency.

Stage and Jacobsen (2001) investigated the value of ORF slope across fourth grade to predict performance on the WASL state test. They found that slope was not a significant predictor of WASL. However, their analysis may not have enabled a clear view of the value of slope. Stage and Jacobsen fit a hierarchical linear model of ORF that estimated an intercept in May and a slope for the preceding year. They then predicted intercept and slope with a later, end-of-year administration of the WASL. Next, Stage and Jacobsen (a) saved ORF slopes from their hierarchical linear model; (b) computed fall, winter, and spring estimates from the slopes; and (c) entered fall, winter, spring, and slope estimates into a regression model. Because they computed the fall, winter, and spring ORF estimates from the slope, the four variables in the regression likely led to multicollinearity, if not a linear dependency, inflating standard errors and yielding tests of statistical significance that are highly problematic (Cohen, Cohen, West, & Aiken, 2003).
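The dependency is easy to demonstrate. In the sketch below (hypothetical data, not the Stage and Jacobsen dataset), the fall, winter, and spring estimates are computed from the same intercept and slope, so the four-column design matrix has rank 2 and ordinary least squares has no unique solution:

```python
# Sketch of the linear dependency that arises when fall, winter, and
# spring ORF estimates are all computed from the same intercept and
# slope and then entered together in one regression (hypothetical data).
import numpy as np

rng = np.random.default_rng(0)
n = 200
intercept = rng.normal(90, 20, n)   # predicted ORF in the fall
slope = rng.normal(1.0, 0.3, n)     # weekly ORF growth

fall = intercept                    # week 0
winter = intercept + 16 * slope     # week 16
spring = intercept + 32 * slope     # week 32

X = np.column_stack([fall, winter, spring, slope])
# Rank 2 < 4 columns: spring = 2*winter - fall, and winter - fall = 16*slope.
print(np.linalg.matrix_rank(X))     # 2
```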

Fuchs, Fuchs, and Compton (2004) conducted a study that examined how well level and slope on two CBMs, Word Identification Fluency and Nonsense Word Fluency, predicted performance on criterion measures of reading, including the Woodcock-Johnson reading subtests (Woodcock & Johnson, 1989), Word Attack and Word Identification, and the Comprehensive Reading Assessment Battery. Over a 1-year period, correlations between slope on Word Identification Fluency and criterion measures ranged from .50 to .85, and on Nonsense Word Fluency, the slope correlations ranged from .27 to .58. The Fuchs et al. study is similar, conceptually, to the current study because it linked slope of performance to criterion measures of reading. There are, however, two important differences between Fuchs et al. and the present study. First, the slope measure in the current study is ORF. Second, in Fuchs et al., the slope was estimated via a two-stage model and its contribution evaluated independent of students' initial performance level. The effect of slope is difficult to interpret in a model without the intercept, especially if the intercept correlates with slope (positively or negatively), as is often the case with academic tests.

In the current study, we were interested in the contribution of slope, controlling for initial level of performance. Initial level of performance on ORF, and other screening measures, is used to identify struggling readers who may require more intensive instruction. Change in performance over time is interpreted in the context of initial level of performance. The assumption is that change on ORF represents real progress that students are making in learning to read, and the degree to which students catch up with grade-level peers is based on their initial level of performance and the growth they make over time. Previous studies have not examined the degree to which change in ORF over time is actually related to better performance on specific high-stakes reading tests. Our major focus is what contribution slope makes to predicting performance on an outcome, after controlling for initial level of performance.

Purpose of the Study and Research Questions

Three objectives guided this study. The first was to investigate the relation between ORF and specific high-stakes reading tests for all students in Oregon Reading First. We expected the magnitude of association to be moderate to strong, consistent with prior research. The second objective was to examine whether slope on ORF predicted performance on specific high-stakes reading tests over and above initial level of ORF performance alone. Our question was, after controlling for initial level of performance on ORF in the middle of Grade 1 or the beginning of Grade 2, does growth on ORF add significantly to the prediction of performance on specific high-stakes reading measures at the end of Grades 2 and 3? Our prediction was that slope would add significantly to prediction accuracy.

The third objective was to test how well various models that included ORF and performance on specific high-stakes reading tests in Year 1 predicted performance on specific high-stakes reading tests in Year 2. In particular, we were interested in testing how well ORF stood up in prediction models that included a comprehensive measure of reading in the model. We expected that even under prediction models that included a comprehensive measure of reading, ORF would still provide important information in the prediction, consistent with the findings of Wood (2006). Thus, we wanted to know: if performance on ORF were known, would performance on specific high-stakes reading tests in Year 1 contribute additional information in predicting performance on specific high-stakes reading tests in Year 2? We also wanted to know: if performance on specific high-stakes reading tests in Year 1 were known, would additional information about ORF add significantly to the prediction accuracy of specific high-stakes reading tests in Year 2? We expected ORF level of performance and high-stakes tests in Year 1 to predict performance on high-stakes tests in Year 2 about equally well. We also expected ORF slope to account for additional variance in overall reading proficiency, beyond information provided by ORF level or high-stakes reading tests in Year 1.

We address Objectives 2 and 3 separately for high-stakes reading measures in Grades 2 and 3. In the Results section, we first address Objectives 2 and 3 for students with a high-stakes reading measure in Grade 2, and then we address Objectives 2 and 3 for students with a high-stakes reading measure in Grade 3.
Table 1
Descriptive Data on Oral Reading Fluency and High-Stakes Primary Reading Test by Student Cohort

                                 Oral Reading Fluency Measure                           Primary Reading Test
Student Cohort    Number    Beginning        Middle           End               SAT-10            OSRA
                            Mean (SD)        Mean (SD)        Mean (SD)         Mean (SD)         Mean (SD)
Cohort 4: Y2 G1   2,489     --               24.13 (27.50)    45.67 (33.71)     542.47 (46.78)    --
Cohort 3: Y1 G1   2,484     --               20.54 (25.60)    41.27 (32.47)     536.03 (45.82)    --
Cohort 3: Y2 G2   2,417     37.22 (30.06)    63.08 (38.51)    80.18 (39.99)     584.27 (43.09)    --
Cohort 2: Y1 G2   2,409     32.82 (30.34)    58.02 (38.60)    74.89 (40.36)     578.89 (44.03)    --
Cohort 2: Y2 G3   2,367     62.46 (35.55)    79.62 (39.63)    97.45 (39.51)     --                209.59 (10.56)
Cohort 1: Y1 G3   2,329     58.44 (35.97)    76.54 (40.04)    94.10 (40.67)     --                208.73 (11.98)

Note. SAT-10 = Stanford Achievement Test—Tenth Edition; OSRA = Oregon State Reading Assessment; Y = year; G = grade. Number of participants represents the number present at the fall assessment administration. Oral Reading Fluency is the raw score expressed as correct words per minute; the SAT-10 and OSRA are reported as scaled scores.


Method

Participants and Setting

Students from 34 Oregon Reading First schools participated in this study. All 34 schools were funded in the first cycle of Reading First and represented 16 independent school districts located in most regions of the state. Half of the schools were in large urban areas, and the rest were approximately equally divided between midsize cities with populations between 50,000 and 100,000 (8 schools) and rural areas (9 schools). In the 2003-2004 school year, 10% of the students received special education services and 32% of the students were English learners. Approximately 68% of the English learners were Latino students; the remaining were Asian students, American Indians, and Hawaiian Pacific Islanders.

Schools eligible for Reading First met specific criteria for student poverty level and reading performance. During the year prior to Reading First implementation (2002-2003), 69% of students across all Reading First schools qualified for free or reduced-cost lunch, and 27% of third-graders did not pass minimum proficiency standards on the Oregon Statewide Reading Assessment. The overall state average for free or reduced-cost lunch in 2002-2003 was 44%, and 18% of the third-grade students did not pass the third-grade test.

Data were collected during the first 2 years of Oregon Reading First implementation. Four cohorts of students participated, with each cohort representing approximately 2,400 students (see Table 1). Data from Cohort 1 were collected in Year 1 only (2003-2004) and included only students who were in Grade 3. In Year 2 (2004-2005), these students were in Grade 4, no longer in Reading First, and consequently did not provide data for analysis. Data from Cohorts 2 and 3 were collected in Years 1 and 2. Cohort 2 was in Grade 2 in Year 1 and Grade 3 in Year 2. Cohort 3 was in Grade 1 in Year 1 and in Grade 2 in Year 2. Data from Cohort 4 were collected in Year 2 only. These students were in Grade 1 in the second year of data collection. In Year 1, students in Cohort 4 were in kindergarten and were not administered ORF measures.

In Oregon Reading First, virtually all students in kindergarten through third grade participated in four assessments per year. In the fall, winter, and spring, students were administered Dynamic Indicators of Basic Early Literacy Skills (DIBELS) measures (Kaminski & Good, 1996) as part of benchmark testing. In Grades 1-3, the primary DIBELS measure was ORF. In the spring, students were administered a high-stakes reading test at the end of the year. A small percentage of students were excluded from high-stakes testing. In Grades 1 and 2, 3.3% and 3.6% of students were exempted from testing based on criteria recommended by the publisher. As with all longitudinal studies, some students failed to provide data for one or more assessments. In Grades 2 and 3, 10-13% of the students were missing data for any given ORF assessment. In Grade 1, 5% of students were missing the winter ORF assessment and 7% were missing the spring assessment. Student data were included in the analysis if they had (a) at least one ORF data point and (b) a valid score on one high-stakes assessment in either Year 1 or 2. We assumed that data were missing at random and analyzed them with maximum likelihood methods that use all data available to minimize bias (Little & Rubin, 2002).

Oregon Reading First Implementation

Each Oregon Reading First school provided at least 90 min of daily, scientifically based reading instruction for all kindergarten through third-grade students, with a minimum of 30 min of daily small-group, teacher-directed reading instruction. Instruction was focused on the essential elements of beginning reading (National Reading Panel, 2000): phonemic awareness, alphabetic principle, fluency, vocabulary, and comprehension. Group size, curricular programs, and instructional emphases were determined according to student instructional needs based on screening and progress-monitoring data. For example, students were carefully provided with reading material that matched their instructional level (i.e., 90% accuracy rates). Students not making adequate reading progress were provided additional instructional support beyond the 90-min reading block targeting deficient skill areas.

In each school, a Reading First mentor-coach worked closely with classroom teachers and school-based teams to support effective reading instruction. Ongoing, high-quality professional development was provided to support teachers and instructional staff. Professional development included time for teachers to analyze student performance data, plan, and refine instruction.

Measures

DIBELS Oral Reading Fluency. The DIBELS measure of ORF was developed following procedures used in the development of other CBM measures. DIBELS ORF measures are 1-min fluency measures that take into account accuracy and speed of reading connected text. The difficulty level of the DIBELS ORF passages was calibrated for grade-level difficulty (Good & Kaminski, 2002). In the standard administration protocol, students are administered three passages at each of three benchmark assessment points during the year (beginning, middle, and end of the year), and the median score at each point is used as the representative performance score. On DIBELS ORF passages, alternate-form reliability drawn from the same level ranged from .89 to .94, and test-retest reliabilities for elementary students ranged from .92 to .97 (Good & Kaminski, 2002). In the context of Oregon specifically, the correlation between DIBELS ORF passages administered in Grade 3 and the Oregon State Reading Assessment administered in Grade 3, calculated with 364 students, was .67 (Good, Simmons, & Kame'enui, 2001).
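A sketch of the benchmark scoring rule just described, with hypothetical passage scores for one student at one benchmark point:

```python
# Benchmark ORF score: the median of three grade-level passages read
# at one assessment point (passage scores here are hypothetical).
from statistics import median

winter_passages = [47, 52, 44]    # WCPM on three passages
print(median(winter_passages))    # 47, the representative score
```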

Test-retest reliability data have been collected on school administration of DIBELS measures, including ORF, on two occasions. In the spring of the 2004-2005 school year, six schools were randomly selected and 20% of the students in kindergarten and first grade were retested on all measures within 3 weeks of the original testing. The test-retest correlation for ORF was .98, with a range of .96-.99 across the six schools. In the spring of the 2005-2006 school year, eight schools were randomly selected and approximately 20% of the students (n = 320) were retested on ORF measures in Grades 1 and 2. Mean test-retest reliabilities were .94 and .97 in Grades 1 and 2, respectively. This includes both test-retest and interrater reliability.

Stanford Achievement Test—Tenth Edition (SAT-10). The SAT-10 (Harcourt Assessment, 2002) is a group-administered, norm-referenced test of overall reading proficiency. The measure is not timed, although guidelines with flexible time recommendations are given. Reliability and validity data are strong. Kuder-Richardson reliability coefficients for total reading scores were .97 at Grade 1 and .95 at Grade 2. The correlations between the total reading score and the Otis-Lennon School Ability Test ranged from .61 to .74. The normative sample is representative of the U.S. student population.

All four of the SAT-10 subtests were administered at first grade: Word Study Skills, Word Reading, Sentence Reading, and Reading Comprehension. This entire battery takes approximately 155 min to complete. The second-grade version of the SAT-10 included the subtests Word Study Skills, Reading Vocabulary, and Reading Comprehension. The entire test takes approximately 110 min to complete.

Oregon Statewide Reading Assessment. The Oregon Statewide Reading Assessment (OSRA) is an untimed, multiple-choice test administered yearly to all students in Oregon starting in third grade. Reading passages include literary, informative, and practical selections. Seven individual subtests require students to (a) understand word meanings in the context of a selection; (b) locate information in common resources; (c) answer literal, inferential, and evaluative comprehension questions; (d) recognize common literary forms such as novels, short stories, poetry, and folk tales; and (e) analyze the use of literary elements and devices such as plot, setting, personification, and metaphor. The Oregon State Department of Education reports that the criterion validity between the OSRA and the California Achievement Test was .75 and with the Iowa Test of Basic Skills it was .78 (Oregon State Department of Education, 2005). The four alternate forms used for the OSRA demonstrated an internal consistency reliability (Kuder-Richardson formula 20 coefficient) of .95 (Oregon State Department of Education, 2000).

Data Collection Procedures

ORF measures were administered to students by school-based assessment teams in the fall, winter, and spring. Each assessment team received a day of training on DIBELS administration and scoring. In addition, a reading coach at each school continued the assessment training by conducting calibration practice sessions with assessment team members that involved student participation. To maintain consistency across testers, the coaches conducted individual checks with each assessment team member before data collection.

The SAT-10 and the OSRA were administered in the spring. The Reading First coach supervised and monitored SAT-10 testing. Reading coaches at each school were trained by the Oregon Reading First Center. Coaches provided additional training to all teaching staff in their building on test administration and monitoring. Coaches observed testing procedures using a fidelity implementation checklist. Median fidelity on 18 test administration questions was 98.3%. Third-grade students were administered the OSRA according to procedures established by the school, district, and state. SAT-10 scoring was completed by the publisher and OSRA scoring by the Oregon Department of Education. Both organizations have very strong internal structures to ensure accurate data scoring.

Data Analysis

Growth curve analyses tested how well ORF trajectories, defined by their intercepts and slopes, predicted performance on the SAT-10 or OSRA administered at the end of Year 2 (Li, Duncan, McAuley, Harmer, & Smolkowski, 2000; Singer & Willett, 2003). Raw scores were used in all analyses of ORF data and scaled scores were used in analyses of SAT-10 and OSRA tests. For the calculation of reading trajectories over time on ORF, we used growth curve analysis to derive predicted scores in terms of change per measurement point. Descriptions of the procedures for growth curve analysis and developing the prediction models follow.

Growth curve analysis. We first used SAS PROC MIXED (SAS Institute, 2005) to construct a growth model of the repeated ORF assessments nested within individual students. The initial growth model determined the overall growth trajectories from first to third grade and included all four cohorts of students. For two of the four cohorts, measures of ORF span 2 years. For Cohort 3, there are measures of ORF from the middle of Grade 1 to the end of Grade 2, and for Cohort 2 there are measures of ORF from the beginning of Grade 2 to the end of Grade 3. We did not expect linear growth across grades because of student summer vacation. To account for this expected shift in trajectories, we added two observation-level effects that allowed a level change at the beginning of second and third grades. These terms were added to the model to improve fit and not for substantive interpretation.

We adjusted within-year growth to account for slightly greater ORF growth during the fall of second grade and a slight decline in ORF growth during the middle and end of third grade. This represents an empirically driven pattern of growth, with greater acceleration in fluency reported in earlier grades (Fuchs et al., 1993; Hasbrouck & Tindal, 1992, 2006). For the assessment at the middle of Grade 2, we added 0.2 to specify a 20% increase in growth during the first half of second grade. In the middle of third grade, we subtracted 0.2 from the linear trajectory, and we decreased the trajectory for the end of third grade by 0.4. Thus, the slope coding, t_ij (where i = each assessment occasion and j = each individual), was 0 in the middle of Grade 1, then 1.0, 2.0, 3.2, 4.2, 5.2, 6.0, and 6.6 by the end of Grade 3, to model the expected growth pattern.
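A minimal sketch of this time coding and of a random-intercept, random-slope growth model, written in Python with statsmodels (the authors used SAS PROC MIXED; the data frame, column names, and the simplified treatment of the grade-level change terms as fixed effects are assumptions of this sketch):

```python
# Sketch of the ORF growth model: random intercepts and slopes over
# the adjusted time coding described above, with level-change dummies
# at the start of Grades 2 and 3. Hypothetical long-format data.
import pandas as pd
import statsmodels.formula.api as smf

# Adjusted slope coding for the eight benchmark occasions.
TIME_CODE = {
    "g1_mid": 0.0, "g1_end": 1.0,
    "g2_beg": 2.0, "g2_mid": 3.2, "g2_end": 4.2,
    "g3_beg": 5.2, "g3_mid": 6.0, "g3_end": 6.6,
}

def fit_growth_model(df: pd.DataFrame):
    """df: one row per assessment, with columns student_id, occasion,
    orf, grade2, grade3 (0/1 indicators for Grade 2 and 3 occasions)."""
    df = df.assign(time=df["occasion"].map(TIME_CODE))
    model = smf.mixedlm(
        "orf ~ time + grade2 + grade3",  # fixed effects
        data=df,
        groups=df["student_id"],
        re_formula="~time",              # random intercept and slope
    )
    return model.fit()
```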


Prediction models. We next constructed a set of models that predicted performance on comprehensive reading tests with student reading data available from 2 school years. These models compared three predictors of performance on the comprehensive reading test administered at the end of Year 2: the ORF intercept in Year 1, the ORF slope across 2 years, and the comprehensive reading test score in Year 1. Because we used the intercept and slope as predictors, these models were fit with Mplus (Muthén & Muthén, 1998-2004), a flexible and general statistical software package built from a structural equation modeling framework.

For our prediction models, we split the sample into two groups, with one model for Cohorts 1-3, modeled across Grades 2 and 3, and another model for Cohorts 2-4, modeled across Grades 1 and 2. Figure 1 depicts the best-fitting model for Grades 1 and 2. This model shows the five observed ORF assessments that cut across Grades 1 and 2 (squares) and the ORF intercept and slope (circles). The model for Grades 2 and 3 was similar in structure, except that instead of five observed ORF assessments there were six (three per grade).

The relations among the constructs of most interest are depicted in Figure 1: SAT-10 in the spring of Grade 2 predicted by (a) the ORF intercept, (b) the ORF slope, and (c) SAT-10 in the spring of Grade 1. This portion of the model has an interpretation similar to a standard regression analysis and represents the focus of this study. To evaluate the competition between predictors, we obtained standardized estimates of the regression coefficients and the variance explained in the Grade 2 SAT-10 from Mplus as the usual R² value. The complete model also assumes correlations between the first-grade SAT-10, ORF intercept, and ORF slope, denoted by curved lines. The Grade 2 intercept represents the level change across the summer, discussed above.
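The authors estimated this as a single latent-variable model in Mplus. As a rough approximation only, the same idea can be sketched in two stages in Python: extract each student's estimated ORF intercept and slope from the mixed model above, then regress the Year 2 test on them plus the Year 1 test (all data and column names are hypothetical, and a two-stage fit ignores the uncertainty in the first-stage estimates):

```python
# Two-stage approximation of the prediction model: per-student ORF
# intercept and slope estimates, plus the Year 1 test score, predict
# the Year 2 test score. Hypothetical data; the study fit this jointly.
import pandas as pd
import statsmodels.formula.api as smf

def predict_year2(growth_result, students: pd.DataFrame):
    """students: one row per student, indexed by student_id, with
    columns sat10_y1 and sat10_y2 (scaled scores)."""
    # Per-student deviations for the random intercept and slope.
    re = pd.DataFrame(growth_result.random_effects).T
    re.columns = ["b0", "b1"]
    students = students.join(re)
    students["orf_intercept"] = growth_result.fe_params["Intercept"] + students["b0"]
    students["orf_slope"] = growth_result.fe_params["time"] + students["b1"]
    return smf.ols(
        "sat10_y2 ~ orf_intercept + orf_slope + sat10_y1",
        data=students,
    ).fit()
```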

Model fit. The fit of the models to the observed data was tested with the comparative fit index (Bentler, 1990; Bollen & Long, 1993; Marsh, 1995) and the Tucker-Lewis index (Bollen, 1989; Tucker & Lewis, 1973). Criterion values of .95 were chosen for both (Hu & Bentler, 1999). We reported the χ² value, but its sensitivity to large samples renders it too conservative as a measure of model fit (Bentler, 1990). We also provided estimates of the root mean square error of approximation (RMSEA). Values below .05 have traditionally been recommended to indicate acceptable fit, but recent research suggests the use of more relaxed criteria (Hu & Bentler, 1999) and has criticized "rigid adherence to fixed target values" (Steiger, 2000, p. 151). Thus, we adopted .10 as our RMSEA target value for acceptable fit.

We also used Akaike's information criterion (AIC), an index of relative fit among competing models estimated with the same data (Akaike, 1974; Burnham & Anderson, 2002). The AIC was used to compare different predictor sets of the specific high-stakes tests within the same pair of grades. For the prediction of Grade 2 SAT-10, one model included the predictors ORF intercept and ORF slope, a second model included all three paths (ORF intercept, ORF slope, and Grade 1 SAT-10), a third included ORF slope and Grade 1 SAT-10, and so on. From the raw AIC value for each model, which has little meaning on its own, we computed a ΔAIC value by subtracting the AIC for the best-fitting model from the AIC for each other model. Thus, the best-fitting model necessarily has a ΔAIC of 0.0. Lower ΔAIC values indicate more support. Values of 2.0 or below indicate competitive models, and values that differ more than 10.0 from the minimum are considered to have little support over the best-fitting model (Burnham & Anderson, 2002).

[Figure 1. Growth model for ORF across Grades 1 and 2 with ORF intercept, ORF slope, and Grade 1 SAT-10 predicting SAT-10 in the spring of Grade 2. SAT-10 = Stanford Achievement Test—Tenth Edition; ORF = oral reading fluency; F = fall (beginning) assessment; W = winter (middle) assessment; S = spring (end) assessment; CFI = comparative fit index; TLI = Tucker-Lewis index.]
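The ΔAIC comparison reduces to a subtraction against the minimum; a small sketch with made-up AIC values:

```python
# Delta-AIC comparison among competing prediction models, using the
# 2.0 and 10.0 rules of thumb from Burnham and Anderson (2002).
# The AIC values below are illustrative, not the study's actual values.
aics = {
    "ORF intercept only": 41250.3,
    "ORF intercept + slope": 41192.8,
    "intercept + slope + Year 1 test": 41181.1,
}
best = min(aics.values())
for model, aic in aics.items():
    delta = aic - best
    support = ("competitive" if delta <= 2.0
               else "some support" if delta <= 10.0
               else "little support")
    print(f"{model}: dAIC = {delta:.1f} ({support})")
```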

Results

Table 1 presents descriptive data for ORF and the high-stakes reading tests. Average performance on the SAT-10 corresponds precisely to the 50th percentile in both Grades 1 and 2. Average performance on the OSRA corresponds to the 37th and 40th percentiles in Year 1 and Year 2 of the study, respectively.

On ORF, within-year performance increased at each measurement point and across years. From the end of one grade to the next (e.g., end of Grade 1 to beginning of Grade 2), there is a consistent drop in performance. We attributed this drop to a summer effect and the use of more difficult reading material as students move up in grade. Finally, mean performance in relation to targeted benchmark levels of performance is typically slightly above or slightly below recommendations (Good et al., 2001). In the spring of Grades 1-3, the recommended benchmarks are 40, 90, and 110 words read correctly per minute (Good et al., 2001).

Correlations Between ORF and High-Stakes Reading Measures

Thirteen correlations between ORF and high-stakes reading tests addressed our first research objective. Grade 1 ORF correlated .72 in the winter and .82 in the spring with the Grade 1 SAT-10. For the Grade 2 SAT-10, correlations with the five ORF assessments from winter of Grade 1 through spring of Grade 2 were .63, .72, .72, .79, and .80. Six ORF assessments from fall of Grade 2 through spring of Grade 3 correlated with the OSRA at .58, .63, .63, .65, .68, and .67. These correlations were consistent with previous research on the association between ORF and criterion measures of reading performance (Marston, 1989; Shinn, 1998).

Growth on ORF to Predict Performance on the Primary Reading Measure

To address our second objective, determining how well growth over time on ORF added to predictions of performance on the comprehensive reading measures administered at the end of Grades 2 (SAT-10) and 3 (OSRA), we began by fitting an accelerated longitudinal growth model (Duncan, Duncan, Strycker, Li, & Alpert, 1999; Miyazaki & Raudenbush, 2000) for ORF across Grades 1-3, with 11,829 students representing 38,164 ORF assessments. We tested the relative fit of several models with the AIC and chose the best-fitting model for further analyses. The best-fitting growth model included parameters for time and level adjustments for Grades 2 and 3. These effects were allowed to vary for individual students and to correlate with each other. We compared the residual variance estimate from this model to that from an unconditional baseline model of the ORF assessments with no predictors to provide an estimate of the reduction in variation in ORF assessments accounted for by the growth model (Singer & Willett, 2003; Snijders & Bosker, 1999). The growth model reduced the ORF residual variance from 1,813.4 in the baseline model to 86.9 in the full growth model. Thus, the small set of fixed and random effects in the growth model accounted for 95.2% of the variance in ORF measures across time.
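The reported 95.2% is the proportional reduction in residual variance between the two models:

```python
# Proportional reduction in residual variance (a pseudo R-squared;
# Singer & Willett, 2003), from the two estimates reported above.
baseline_var = 1813.4   # unconditional model, no predictors
growth_var = 86.9       # full growth model
print(f"{1 - growth_var / baseline_var:.3f}")  # 0.952
```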

In predicting performance on the specific high-stakes reading test, we conducted two analyses, one with ORF data from first and second grade (Cohorts 2-4), and the second with ORF data from second and third grade (Cohorts 1-3). For each analysis, we fit six competing models and estimated their relative fit to the data. Each model used the same growth pattern of ORF described in the accelerated longitudinal analysis, but a different set of predictors of the specific high-stakes reading test was used. For second grade, we predicted performance on the SAT-10 total scaled score administered at the end of Grade 2 with the following six models: (a) ORF intercept; (b) ORF intercept and slope; (c) ORF intercept and slope and SAT-10 total scaled score from Grade 1; (d) the SAT-10 administered at the end of Grade 1; (e) ORF slope and the SAT-10 from Grade 1; and (f) ORF intercept and the SAT-10 from Grade 1. For third grade, the six models predicted the OSRA at the end of Grade 3 and entailed a similar set of predictors.

The absolute fit indices, comparative fit index and Tucker-Lewis index, were greater than .95 for every model. The χ² values were all statistically significant, which is to be expected given the large samples. All RMSEA values were below .10 and were adequate for these prediction models.

First and Second Grade (Cohorts 2-4)

ORF intercept and slope predicted a statistically significant portion of performance on the Grade 2 SAT-10 (p < .0001). Together, ORF level and ORF slope explained 70% of the variance on the SAT-10 high-stakes reading test at the end of Grade 2. In addressing Research Question 2, ORF slope accounted for an additional 10% of the variance on the Grade 2 SAT-10, after controlling for initial level of performance. This represents a robust contribution of slope in accounting for unique variance in the comprehensive reading measure.

Table 2 and Figure 1 give the results for Research Question 3. The growth model using all three predictors—ORF intercept and slope across Grades 1 and 2 and the Grade 1 SAT-10—to predict performance on the SAT-10 at the end of Grade 2 fit the data best (ΔAIC = 0.0). Together, these three predictors explained 76% of the variance in SAT-10 performance at the end of Grade 2. The standardized estimates show that the first-grade SAT-10 predicts best, but ORF slope is also a strong predictor. The ORF intercept makes a statistically significant but smaller contribution, partly because it was highly correlated with performance on the SAT-10 in first grade (see Table 2).
Table 2
Grade 2 SAT-10 Score Predicted by Grade 1 SAT-10 Score and ORF Intercept and Slope Across Grades 1 and 2

                               Raw         Standard    Std         t          p
                               Estimate    Error       Estimate    Value      Value
Fixed effects
  2nd SAT-10 intercept         316.85      (10.88)                 29.13      <.0001
  ORF intercept                .42         (.03)       .25         13.14      <.0001
  ORF slope                    1.86        (.08)       .36         22.69      <.0001
  1st SAT-10                   .41         (.02)       .42         17.51      <.0001
R²
  2nd SAT-10                   .76
Means
  ORF intercept                19.04       (.33)                   58.04      <.0001
  ORF slope                    21.32       (.13)                   161.19     <.0001
  ORF Gr 2 change              -25.58      (.31)                   -83.10     <.0001
  1st SAT-10                   533.55      (.57)                   943.79     <.0001
Variances
  2nd SAT-10 residual          491.78      (14.18)                 34.69      <.0001
  ORF intercept                728.32      (13.81)                 52.75      <.0001
  ORF slope                    74.54       (2.26)                  32.97      <.0001
  ORF Gr 2 change              80.38       (9.76)                  8.24       <.0001
  ORF residual                 74.80       (1.21)                  61.68      <.0001
  1st SAT-10                   2121.54     (39.03)                 54.35      <.0001
Goodness of fit
  χ² (df)                      1095.2 (15)
  CFI                          0.973
  TLI                          0.962
  RMSEA (95% CI)               0.091 (.087, .096)

Note. Std = standardized; SAT-10 = Stanford Achievement Test—Tenth Edition; ORF = oral reading fluency; Gr = grade; CFI = comparative fit index; TLI = Tucker-Lewis index; RMSEA = root mean square error of approximation; CI = confidence interval.


Second and Third Grade (Cohorts 1-3)

In second and third grade, ORF intercept and slope also predicted a significant portion of performance on the third-grade OSRA (p < .0001). Together, ORF intercept and slope accounted for 52% of the variance on the OSRA.

In addressing Research Question 2, slope on ORF contributed an additional 3% to prediction accuracy, which, although statistically significant, represents a small unique contribution for slope.

Regarding the third research question in second and third grade, the best-fitting model also included all three predictors—ORF intercept and slope and the second-grade SAT-10 (ΔAIC = 0.0). No other models fit the data well. Together, these three predictors accounted for 59% of the variance in the OSRA at the end of third grade. The standardized path weights, shown in Table 3, indicate that most of the variance was predicted by the SAT-10 in Grade 2, and the ORF intercept predicted more variance than ORF slope. The reduced influence of slope is not unexpected because of the high correlations between SAT-10 and ORF intercept (r = .78) and slope (r = .50), and the correlation between ORF intercept and slope (r = .29).
Table 3
Grade 3 OSRA Score Predicted by Grade 2 SAT-10 Score and ORF Intercept and Slope Across Grades 2 and 3

                               Raw         Standard    Std         t          p
                               Estimate    Error       Estimate    Value      Value
Fixed effects
  OSRA intercept               125.39      (3.66)                  34.31      <.0001
  ORF intercept                .07         (.01)       .21         8.31       <.0001
  ORF slope                    .25         (.03)       .16         8.17       <.0001
  2nd SAT-10                   .13         (.01)       .51         18.43      <.0001
R²
  OSRA                         .59
Means
  ORF intercept                30.92       (.39)                   78.91      <.0001
  ORF slope                    21.53       (.12)                   184.93     <.0001
  ORF Gr 3 change              -33.65      (.34)                   -100.22    <.0001
  2nd SAT-10                   576.39      (.56)                   1027.72    <.0001
Variances
  OSRA residual                53.35       (1.53)                  34.92      <.0001
  ORF intercept                1071.19     (20.40)                 52.52      <.0001
  ORF slope                    53.33       (1.79)                  29.74      <.0001
  ORF Gr 3 change              190.65      (13.61)                 14.01      <.0001
  ORF residual                 88.28       (1.24)                  71.16      <.0001
  2nd SAT-10                   2002.72     (39.12)                 51.20      <.0001
Goodness of fit
  χ² (df)                      1743.6 (24)
  CFI                          0.965
  TLI                          0.959
  RMSEA (95% CI)               0.091 (.088, .095)

Note. Std = standardized; SAT-10 = Stanford Achievement Test—Tenth Edition; ORF = oral reading fluency; OSRA = Oregon State Reading Assessment; Gr = grade; CFI = comparative fit index; TLI = Tucker-Lewis index; RMSEA = root mean square error of approximation; CI = confidence interval.
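The shrinkage visible in Table 3 can be reproduced from the correlations themselves. In the sketch below, the predictor intercorrelations are those reported in the text, while the three zero-order correlations with the OSRA are hypothetical values chosen for illustration; solving R * beta = r gives standardized weights close to those in Table 3:

```python
# How correlated predictors shrink standardized regression weights:
# solve R * beta = r for the three Grade 3 predictors. Predictor
# intercorrelations are from the text; the outcome correlations in r
# are hypothetical, chosen only to illustrate the mechanism.
import numpy as np

# Order: Grade 2 SAT-10, ORF intercept, ORF slope.
R = np.array([[1.00, 0.78, 0.50],
              [0.78, 1.00, 0.29],
              [0.50, 0.29, 1.00]])
r = np.array([0.75, 0.65, 0.48])  # hypothetical zero-order correlations

print(np.linalg.solve(R, r).round(2))  # [0.5  0.21 0.17]
```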


In summary, the best-fitting model in Grades 1 and 2 and the best-fitting model in Grades 2 and 3 included the same set of predictors: ORF intercept and slope and the high-stakes reading measure in Year 1. In both models, ORF slope accounted for a statistically significant amount of the variance in predicting the high-stakes measure. In first and second grade, the contribution of slope was greater than in second and third grade.

Discussion

In Grades 1-3, ORF was associated with performance on the SAT-10 high-stakes test in Grade 2 and the OSRA high-stakes test in Grade 3. Correlations ranged from .58 to .82, with most correlations between .60 and .80. This supports previous research on the association of ORF with commercially available standardized tests (Marston, 1989) as well as more recent research that has examined correlations between ORF and states' reading tests (e.g., Barger, 2003; McGlinchey & Hixson, 2004; Shaw & Shaw, 2002; Vander Meer et al., 2005; Wilson, 2005). It also extends previous research by demonstrating positive correlations between ORF and criterion measures of reading in Grades 1 and 2.

The most important finding in this study was that ORF slope added to the accuracy of predicting performance on specific high-stakes tests in Year 2, above information provided by level of performance alone. The added value of slope occurred even when predictors included two high-quality measures that provided unique information about level of reading performance: ORF intercept and performance on a specific high-stakes reading test in Year 1. On the SAT-10 high-stakes reading test in Grade 2, slope added an additional 10% to prediction accuracy, and on the Grade 3 OSRA high-stakes test, the added contribution of slope was 3%.

Together, ORF level and ORF slope explained 70% of the variance on the Grade 2 SAT-10. When the Grade 1 SAT-10 was added to ORF level and slope data, the prediction model accounted for 76% of the variance on the Grade 2 SAT-10. Although this represented a statistically significant improvement in prediction accuracy, a reasonable question is whether an improvement of 6% in prediction accuracy is worth the cost and time associated with the yearly administration of high-stakes tests as a way to help predict future reading achievement.

On the Grade 3 high-stakes reading measure, ORF level and slope accounted for 51% of the variance in the OSRA. The best-fitting model included the Grade 2 SAT-10 with ORF intercept and ORF slope, and accounted for 59% of the variance in the third-grade OSRA scores. This best-fitting model also accounted for significantly less of the variance in the high-stakes reading test than the Grade 2 fully specified model (i.e., ORF level, slope, and prior SAT-10 achievement in Grade 1).

One explanation why ORF might provide a stronger index of overall reading proficiency in Grade 2 than Grade 3 is that the nature of reading development may be different in the two grades, and the ability of reading fluency to provide an overall index of reading proficiency may diminish over this period of time. Although there is some evidence that correlations between ORF and overall reading performance decrease over time (Shinn, 1998; Espin & Tindal, 1998), these changes are typically more apparent when the grade difference is larger than 1 year. Also, previous studies have reported large correlations between ORF and criterion measures of reading proficiency in Grade 3 (Marston, 1989).

The use of a different high-stakes reading measure in third grade may have contributed to the R² reduction in predicting Grade 3 performance. In Grades 1 and 2, the SAT-10 was administered as the high-stakes measure, and in Grade 3 it was the OSRA. The OSRA may measure different aspects of overall reading performance than the SAT-10, or it may be less reliable, thereby attenuating the association. As reviewed in the introduction, correlations between ORF and state reading tests range from .50 (Stage & Jacobsen, 2001) to .80 (Shaw & Shaw, 2002). McGlinchey and Hixson (2004), for example, found correlations between ORF and the Michigan state reading test to be similar to the correlations we report in this study. Stage and Jacobsen (2001) found the lowest correlations between ORF and a state test, the WASL, but they suspected that the low correlations were attributable to the use of written answers and extended-response items on the WASL.

Regarding the OSRA, there is some evidence that this instrument is sound psychometrically when correlations between this measure and commercially available measures are examined. For example, the correlation between the OSRA and the California Achievement Test was .75 and the correlation between the OSRA and the Iowa Test of Basic Skills was .78 (Oregon State Department of Education, 2005). The internal consistency of the OSRA seems very strong: four alternate forms of the OSRA demonstrated an internal consistency reliability of .95 (Oregon State Department of Education, 2000). In the current study, correlations between ORF and OSRA were largely in the .60 range. An important area of further research would be to investigate the technical aspects of state reading tests because there are many different tests being used by states to determine reading proficiency as part of No Child Left Behind.

The Importance of Growth Over Time

Practical applications of ORF growth data are extensive. For example, a standard recommendation in Reading First schools is that ORF be administered once or twice a month for students at risk of reading difficulty. Although the current study investigated growth when ORF was administered up to three times per year, rather than every other week or monthly as is commonly recommended in progress-monitoring assessments, there is no reason to believe slope estimates generated using three-times-per-year assessments versus more regular assessments would be substantially different. In fact, if ORF is a direct measure of reading fluency at any point in time and a moderate to strong gauge of overall reading proficiency, then slope estimates using benchmark data (e.g., three measurement probes, three times per year) should be highly correlated with progress-monitoring data (single measurements biweekly or monthly).

The important point is that regular monitoring of ORF in the early grades provides data to estimate slope, and this study shows that slope is related to performance on comprehensive measures of reading, controlling for initial level of performance. Although regular monitoring of student progress on ORF has long been recommended (e.g., Shinn, 1989), no studies we are aware of have examined whether growth on ORF progress-monitoring data is related to performance on high-stakes measures of reading performance. Future studies should investigate the association between ORF slope, when administration is biweekly or monthly, and overall performance on specific high-stakes reading tests.

Methodological Considerations

We believe this study provides a potentially useful methodology for determining the value of slope on CBM-type measures. There are three important considerations. The first is indexing growth in relation to criterion measures of performance. Only a handful of studies have examined slope on ORF, and most of these suggest that steeper slopes are desirable (Fuchs et al., 1993; Speece & Ritchey, 2005). On a measure like ORF, there is inherent justification for attempts to increase slope. Developing reading fluency is an important goal in its own right (National Reading Panel, 2000), and the fact that reading fluency is also associated with overall reading proficiency is an added benefit. This study indicates that increasing slope on ORF is likely to lead to better performance on comprehensive measures of reading.

A second and related consideration is to examine slope in the context of initial level of performance and without confounds. Stage and Jacobsen (2001) estimated ORF slopes and then calculated three levels of performance across the school year. In their regression model to estimate the contribution of slope, the slope and level data were not independent, likely leading to severe multicollinearity. In another study, Fuchs et al. (2004) considered slope independent of initial level of performance. That is, slopes were evaluated regardless of initial starting point, which leads to interpretation difficulties if intercepts correlate with slopes, a common occurrence in reading. We examined slope in the context of initial level of performance to better interpret its value within Reading First, where children who perform low on measures of reading fluency in the fall of second grade, for example, would be candidates for frequent progress monitoring and small-group instruction. Slope goals would be based on fall performance and would be a more urgent objective for students who scored low in the fall. That is, one measure of intervention effectiveness for a student who scored low on ORF in the fall would be whether that student attained a slope exceeding the slopes of students who started high in the fall. In this way, the student would begin to catch up to the overall performance level of other students. To interpret slopes accurately, it is helpful to equate or control for the starting point.
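A minimal sketch of this analytic approach (Python; the data are simulated and all coefficients are arbitrary assumptions, not the study's estimates) regresses a comprehensive reading outcome on fall ORF level and ORF slope jointly, so the slope effect is interpreted holding the starting point constant:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 500
    fall_orf = rng.normal(50, 15, n)                  # fall ORF level (wcpm)
    # Slopes here correlate negatively with starting level, a common
    # pattern in reading data (an assumption of this simulation).
    slope = rng.normal(1.0, 0.3, n) - 0.005 * (fall_orf - 50)
    outcome = 300 + 1.5 * fall_orf + 40 * slope + rng.normal(0, 20, n)

    # Entering level and slope together yields the contribution of growth
    # conditional on where each student started.
    X = sm.add_constant(np.column_stack([fall_orf, slope]))
    fit = sm.OLS(outcome, X).fit()
    print(fit.params)       # intercept, fall-level effect, slope effect
    print(f"R-squared: {fit.rsquared:.2f}")

Because level and slope enter as separate, simultaneously estimated predictors, the slope coefficient is not confounded with starting point in the way described for the earlier studies.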

The third methodological consideration involves the value of ORF level and slope together as a prediction package. Compared to a model that included another strong predictor, performance on a specific high-stakes reading test in Year 1, we were able to show that the model with just ORF level and slope did very well. On both second- and third-grade high-stakes reading tests administered in Year 2, however, the strongest model included ORF level and slope and performance on a specific high-stakes test in Year 1.
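One way to carry out this kind of nested-model comparison is sketched below (Python; the data are simulated and the effect sizes are assumptions, so the R² values are illustrative only, not the values reported in this study):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 1000
    orf_level = rng.normal(60, 18, n)           # Year 2 fall ORF level
    orf_slope = rng.normal(1.0, 0.3, n)         # Year 2 ORF slope
    year1_test = 0.8 * orf_level + rng.normal(0, 10, n)   # Year 1 high-stakes score
    year2_test = (2.0 * orf_level + 30 * orf_slope + 0.5 * year1_test
                  + rng.normal(0, 25, n))

    # Model A: ORF level and slope only.
    Xa = sm.add_constant(np.column_stack([orf_level, orf_slope]))
    # Model B: adds the Year 1 high-stakes score.
    Xb = sm.add_constant(np.column_stack([orf_level, orf_slope, year1_test]))
    print(f"R2, level + slope:               {sm.OLS(year2_test, Xa).fit().rsquared:.3f}")
    print(f"R2, level + slope + Year 1 test: {sm.OLS(year2_test, Xb).fit().rsquared:.3f}")

The gap between the two R² values is the incremental contribution of the prior-year test once ORF level and slope are already in the model.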

By demonstrating the value of a model with multiple predictors, we are not suggesting that schools should try to use this assessment approach. We believe a strong case can be made from this study that ORF level and slope can be used to estimate how well students are doing in terms of overall reading development. It is valuable to know that a model with ORF level and slope accounts for most of the variance in predicting an outcome on a specific high-stakes test, even when other information about performance on previous high-stakes tests is available. It also seems reasonable that some schools might conclude that, because of the additional value added by high-stakes tests, such tests should be part of a comprehensive assessment framework, along with ORF level and slope. If schools have the resources, it might be useful to administer a comprehensive measure of reading performance before Grade 3.


Implications for School Psychologists

We believe this study has implications for school psychologists. School psychologists are highly qualified to help districts and schools set up assessment systems targeting student reading performance. Determining which measures to administer, selecting a combination of measures that provide complementary, unique information, and understanding differences between level of performance and slope are important and complex tasks. Recent federal initiatives, such as Reading First and Response to Intervention, are pushing strongly in the direction of school-wide data collection and decision making (No Child Left Behind Act, 2002; Individuals With Disabilities Education Improvement Act, 2004). Schools are expected to have the technical knowledge to use and interpret various assessment measures for different purposes. School psychologists can assist schools in analyzing screening and growth data, for example, to determine whether interventions are working for individual students and for groups of students (Shinn, Shinn, Hamilton, & Clarke, 2002).

Response to intervention provides an alternative for the identification of learning disabilities (Fuchs & Fuchs, 1998; Vaughn, Linan-Thompson, & Hickman, 2003; Individuals With Disabilities Education Improvement Act, 2004). School psychologists will be able to provide substantial assistance to schools in setting up systems where the accurate measurement of learning over time takes place, and in determining whether students have received appropriate instruction that would allow them to make sufficient progress in meeting key learning objectives. In this study, level and slope of performance on ORF accounted for over 95% of the variance of ORF assessments, demonstrating that reliable growth estimates can be established in the early grades.
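The figure of over 95% refers to how much of the variance in observed ORF scores is reproduced by a level-plus-slope description of each student's growth. A minimal sketch of an analogous computation (Python; simulated benchmark data, and per-student ordinary least-squares fits rather than the latent growth models actually used in this study):

    import numpy as np

    rng = np.random.default_rng(3)
    weeks = np.array([2.0, 18.0, 34.0])        # fall, winter, spring benchmarks
    n = 200
    levels = rng.normal(45, 12, n)             # per-student intercepts (assumed)
    slopes = rng.normal(0.9, 0.25, n)          # per-student weekly growth (assumed)
    orf = levels[:, None] + slopes[:, None] * weeks + rng.normal(0, 3, (n, 3))

    # Fit one line per student; pool residual and total sums of squares.
    fitted = np.array([np.polyval(np.polyfit(weeks, y, 1), weeks) for y in orf])
    ss_res = ((orf - fitted) ** 2).sum()
    ss_tot = ((orf - orf.mean()) ** 2).sum()
    print(f"ORF variance captured by level + slope: {1 - ss_res / ss_tot:.3f}")

When measurement error is small relative to between-student differences in level and growth, this proportion approaches 1, which is the sense in which level and slope provide reliable growth estimates.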

A centerpiece of the closer integration of general and special education services (Gersten & Dimino, 2006; Fuchs & Fuchs, 2006) will likely be the way schools measure student progress and determine whether the progress a student demonstrates in response to a specific intervention is sufficient. A protracted course of poor student progress in response to well-delivered, research-based interventions may constitute a learning disability. Under these conditions, the stakes involved in monitoring student progress and in defining adequate response to intervention are significant, and school psychologists will be expected to ensure that the approaches used in this process are valid (Messick, 1989; Gersten & Dimino, 2006; Fuchs & Fuchs, 2006).

An extension of this practice provides an opportunity for school psychologists to investigate important patterns in data such as ORF. For example, if only a few students out of many display poor reading growth in the face of what is expected to be a strong reading intervention, the implications drawn might focus exclusively on making adjustments in the reading interventions for the specific students experiencing low growth. If many students display problematic growth in the context of what is thought to be strong reading interventions, it may indicate that the source of the problem lies beyond the individual students. In this case, poor reading growth may signal the need to examine the overall system in which reading instruction is provided while at the same time probing for immediate solutions to the problematic reading growth of individual students. This type of problem solving at both the systems level and the individual student level (Batsche et al., 2005; Tilly, 2008) is central to response to intervention (Individuals With Disabilities Education Improvement Act, 2004).

Context and Limitations of the Study

The context in which the study was conducted is important in three ways. First, the study was conducted in real school settings in which ORF data were used to screen students, monitor progress, and adjust instruction to meet students' needs. Second, all of the Reading First schools in this study provided highly specified reading instruction. Consequently, a great deal is known about the instructional conditions in which student reading performance and growth occurred. Third, a large number of schools and students participated, increasing external validity.

This study included all students in a large-scale reading reform. It did not focus on a subset of students, such as students in special education or students at risk for reading failure. Future research should investigate relations between ORF and high-stakes tests with specific student populations, and in grades other than 1-3. Participation in Reading First is based on high poverty rates and low reading achievement. These findings are likely to generalize to schools not in the Reading First program, but that is currently unknown.

Another important issue is the inability to investigate the cause of the stronger prediction of ORF in Grade 2 versus Grade 3. With these data, it is impossible to test whether the attenuation in variance accounted for (76% for second-grade outcomes versus 59% for third grade) is an artifact of the different high-stakes measures used in Grades 2 and 3 (the SAT-10 in first and second grade and the OSRA in third grade) or is attributable to developmental differences in ORF trajectories.

A final issue concerns the accelerated longitudinal growth model design. A more accurate picture of fluency development may emerge by following the same cohort of students from first grade through third grade. That is, a cohort effect may also be accounting for some of the differences in the predictive power of ORF over time.

Conclusions

We believe the findings of this study support the use of ORF in the context of reading initiatives such as Reading First. In particular, ORF can be part of comprehensive assessment systems that schools develop for the purpose of making a range of decisions about students' reading. Schools are expected to identify as soon as possible students who may have or may develop reading problems, and beginning in first grade ORF can provide valuable information regarding who is on track for successful reading achievement and who is struggling. Also, the growth students make on ORF over time can be used to gauge how well students are developing reading fluency skills as well as other skills that are part of overall reading proficiency.

References

Adams, M. J. (1990). Beginning to read: Thinking and learning about print. Cambridge, MA: MIT Press.

Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19, 716-723.

Barger, J. (2003). Comparing the DIBELS Oral Reading Fluency indicator and the North Carolina end of grade reading assessment (Technical Report). Asheville, NC: North Carolina Teacher Academy.

Batsche, G., Elliott, J., Graden, J., Grimes, J., Kovaleski, J., Prasse, D., et al. (2005). IDEA '04. Response to intervention: Policy considerations and implementation. Alexandria, VA: National Association of State Directors of Special Education.

Bentler, P. (1990). Comparative fit indexes in structural models. Psychological Bulletin, 107, 238-246.

Bollen, K. A. (1989). Structural equations with latent variables. New York: John Wiley & Sons.

Bollen, K. A., & Long, J. S. (1993). Testing structural equation models. Newbury Park, CA: Sage Publications.

Burnham, K. P., & Anderson, D. R. (2002). Model selection and multimodel inference: A practical information-theoretic approach (2nd ed.). New York: Springer-Verlag.

Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.

Deno, S. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52, 219-232.

Deno, S., Fuchs, L., Marston, D., & Shin, J. (2001). Using curriculum-based measurement to establish growth standards for students with learning disabilities. School Psychology Review, 30, 507-524.

Deno, S., Marston, D., Mirkin, P., Lowry, L., Sindelar, P., Jenkins, J., et al. (1982). The use of standard tasks to measure achievement in reading, spelling, and written expression: A normative and developmental study (No. IRLD-RR-87). Minneapolis, MN: IRLD.

Deno, S., & Mirkin, P. (1977). Data-based program modification: A manual. Minneapolis, MN: Leadership Training Institute for Special Education.

Deno, S. L., Mirkin, P. K., & Chiang, B. (1982). Identifying valid measures of reading. Exceptional Children, 49, 36-45.

Duncan, T., Duncan, S., Strycker, L., Li, F., & Alpert, A. (1999). An introduction to latent variable growth curve modeling: Concepts, issues, and applications. Mahwah, NJ: Lawrence Erlbaum Associates.

Espin, C., & Tindal, G. (1998). Curriculum-based measurement for secondary students. In M. Shinn (Ed.), Advanced applications of curriculum-based measurement (pp. 214-253). New York: Guilford Press.

Fuchs, L. S., & Fuchs, D. (1998). Treatment validity: A unifying concept for reconceptualizing the identification of learning disabilities. Learning Disabilities Research and Practice, 13, 204-219.

Fuchs, L., & Fuchs, D. (2002). Curriculum-based measurement: Describing competence, enhancing outcomes, evaluating treatment effects, and identifying treatment nonresponders. Peabody Journal of Education, 77, 64-84.

Fuchs, D., & Fuchs, L. S. (2006). Current issues in special education and reading instruction—Introduction to response to intervention: What, why, and how valid is it? Reading Research Quarterly, 41(1), 92.

Fuchs, L. S., Fuchs, D., & Compton, D. L. (2004). Monitoring early reading development in first grade: Word identification fluency versus nonsense word fluency. Exceptional Children, 71(1), 1.

Fuchs, L., Fuchs, D., Hamlett, C., Walz, L., & Germann, G. (1993). Formative evaluation of academic progress: How much growth can we expect? School Psychology Review, 22, 27-48.

Fuchs, L., Fuchs, D., & Maxwell, L. (1988). The validity of informal reading comprehension measures. Remedial and Special Education (RASE), 9(2), 20-28.

Gersten, R., & Dimino, J. A. (2006). RTI (response to intervention): Rethinking special education for students with reading difficulties (yet again). Reading Research Quarterly, 41(1), 92.

Good, R., & Kaminski, R. (2002). DIBELS oral reading fluency passages for first through third grades (Technical Report No. 10). Eugene: University of Oregon.

Good, R., Simmons, D., & Kame'enui, E. (2001). The importance and decision-making utility of a continuum of fluency-based indicators of foundational reading skills for third-grade high-stakes outcomes. Scientific Studies of Reading, 5, 257-288.

Harcourt Assessment, Inc. (2002). Stanford Achievement Test [SAT-10]. San Antonio, TX: Author.

Hasbrouck, J., & Tindal, G. (1992). Curriculum-based oral reading fluency norms for students in Grades 2 through 5. Teaching Exceptional Children, 24(2), 41-44.

Hasbrouck, J., & Tindal, G. (2006). Oral reading fluency norms: A valuable assessment tool for reading teachers. The Reading Teacher, 59, 636-646.

Hu, L., & Bentler, P. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1-55.

Individuals With Disabilities Education Improvement Act of 2004, Pub. L. 108-466 § 614, Stat. 2706 (2004).

Jenkins, J., Fuchs, L., van den Broek, P., Espin, C., & Deno, S. (2003). Sources of individual differences in reading comprehension and reading fluency. Journal of Educational Psychology, 95, 719-729.

Kaminski, R., & Good, R. (1996). Toward a technology for assessing basic early literacy skills. School Psychology Review, 25, 215-227.

LaBerge, D., & Samuels, S. (1974). Toward a theory of automatic information processing in reading. Cognitive Psychology, 6, 293-323.

Li, F., Duncan, T., McAuley, E., Harmer, P., & Smolkowski, K. (2000). A didactic example of latent curve analysis applicable to the study of aging. Journal of Aging and Health, 12, 388-425.

Little, R. J. A., & Rubin, D. B. (2002). Statistical analysis with missing data (2nd ed.). New York: John Wiley & Sons.

Marsh, H. (1995). The [Delta]2 and [Chi-square]I2 fit indices for structural equation models: A brief note of clarification. Structural Equation Modeling, 2(3), 246.

Marston, D. (1989). Curriculum-based measurement: What is it and why do it? In M. R. Shinn (Ed.), Curriculum-based measurement: Assessing special children (pp. 18-78). New York: Guilford Press.

McGlinchey, M. T., & Hixson, M. D. (2004). Contemporary research on curriculum-based measurement: Using curriculum-based measurement to predict performance on state assessments in reading. School Psychology Review, 33(2), 193-204.

Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13-103). New York: Macmillan.

Miyazaki, Y., & Raudenbush, S. (2000). Tests for linkage of multiple cohorts in an accelerated longitudinal design. Psychological Methods, 5(1), 44-63.

Muthén, L., & Muthén, B. (1998-2004). Mplus: User's guide (3rd ed.). Los Angeles, CA: Author.

National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction (NIH Pub. No. 00-4769). Washington, DC: National Institute of Child Health and Human Development.

No Child Left Behind Act. Pub. L. No. 107-110, § 1111, Stat. 1446 (2002).

Oregon State Department of Education. (2000). Report cards—school and district. Salem: Oregon Department of Education.

Oregon State Department of Education. (2005). Closing the achievement gap: Oregon's plan for success for all students. Salem: Oregon State Department of Education.

Posner, M., & Snyder, C. (1975). Attention and cognitive control. In R. Solso (Ed.), Information processing and cognition: The Loyola Symposium (pp. 55-85). Hillsdale, NJ: Erlbaum.

SAS Institute. (2005). SAS OnlineDoc® 9.1.3, SAS/STAT 9 user's guide. Cary, NC: Author. Retrieved September 20, 2006, from http://9doc.sas.com/sasdoc/

Schilling, S. G., Carlisle, J. F., Scott, S. E., & Zeng, J. (2007). Are fluency measures accurate predictors of reading achievement? The Elementary School Journal, 107(5), 429-448.

Shaw, R., & Shaw, D. (2002). DIBELS Oral Reading Fluency-Based Indicators of the third-grade reading skills for Colorado State Assessment Program (CSAP) (Technical Report). Eugene, OR: University of Oregon.

Shinn, M. (1989). Curriculum-based measurement: Assessing special children. New York: Guilford Press.

Shinn, M. (1998). Advanced applications of curriculum-based measurement. New York: Guilford Press.

Shinn, M., & Bamonto, S. (1998). Advanced applications of curriculum-based measurement: "Big ideas" and avoiding confusion. In M. R. Shinn (Ed.), Advanced applications of curriculum-based measurement (pp. 1-31). New York: Guilford Press.

Shinn, M., Good, R., Knutson, N., Tilly, W., & Collins, A. (1992). Curriculum-based measurement of oral reading fluency: A confirmatory analysis of its relation to reading. School Psychology Review, 21, 459-479.

Shinn, M. R., Shinn, M. M., Hamilton, C., & Clarke, B. (2002). Using curriculum-based measurement in general education classrooms to promote reading success. In M. R. Shinn, H. M. Walker, & G. Stoner (Eds.), Interventions for academic and behavior problems II: Prevention and remedial approaches (pp. 113-142). Bethesda, MD: National Association of School Psychologists.

Singer, J., & Willett, J. (2003). Applied longitudinal data analysis: Modeling change and event occurrence. New York: Oxford University Press.

Snijders, T., & Bosker, R. (1999). Multilevel analysis: An introduction to basic and advanced multilevel modeling. London: Sage.

Speece, D., & Ritchey, K. D. (2005). A longitudinal study of the development of oral reading fluency in young children at risk for reading failure. Journal of Learning Disabilities, 38, 387-399.

Stage, S. A., & Jacobsen, M. D. (2001). Predicting student success on a state-mandated performance-based assessment using oral reading fluency. School Psychology Review, 30(3), 407-420.

Stanovich, K. (1980). Toward an interactive-compensatory model of individual differences in the development of reading fluency. Reading Research Quarterly, 16(1), 32-71.

Stanovich, K. (2000). Progress in understanding reading: Scientific foundations and new frontiers. New York: Guilford Press.

Steiger, J. (2000). Point estimation, hypothesis testing, and interval estimation using the RMSEA: Some comments and a reply to Hayduk and Glaser. Structural Equation Modeling, 7, 149-162.

Tilly, D. (2008). The evolution of school psychology to science-based practice. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 17-36). Bethesda, MD: National Association of School Psychologists.

Tucker, L., & Lewis, C. (1973). A reliability coefficient for maximum likelihood factor analysis. Psychometrika, 38(1), 1-10.

Vander Meer, C. D., Lentz, F. E., & Stollar, S. (2005). The relationship between oral reading fluency and Ohio proficiency testing in reading (Technical Report). Eugene, OR: University of Oregon.

Vaughn, S., Linan-Thompson, S., & Hickman, P. (2003). Response to instruction as a means of identifying students with reading/learning disabilities. Exceptional Children, 69(4), 391-411.

Wilson, J. (2005). The relationship of Dynamic Indicators of Basic Early Literacy Skills (DIBELS) Oral Reading Fluency to performance on Arizona Instrument to Measure Standards (AIMS). Tempe, AZ: Tempe School District No. 3.

Wood, D. E. (2006). Modeling the relationship between Oral Reading Fluency and performance on a statewide reading test. Educational Assessment, 11(2), 85-104.

Woodcock, R., & Johnson, M. (1989). Woodcock-Johnson tests of achievement (rev. ed.). Allen, TX: DLM Teaching Resources.

Date Received: August 21, 2006
Date Accepted: July 9, 2007

Action Editor: Sandra Chafouleas


Scott K. Baker, PhD, is Director of Pacific Institutes for Research. His research interests are in literacy and mathematics interventions and the instructional needs of English language learners.

Keith Smolkowski, PhD, is Associate Scientist and Research Analyst at Oregon Research Institute and Research Methodologist at Abacus Research, LLC. His professional work involves research on early literacy instruction, CBM, child and adolescent social behavior, and teacher and parent behavior management practices. His methodological work has focused on the design and analysis of group-randomized trials and the statistical modeling of longitudinal, multilevel data.

Rachell Katz, PhD, is Regional Coordinator for the Oregon Reading First Center. Her research interests include implementation of school-wide literacy programs, English language learners, and early intervention.

Hank Fien received his PhD from the University of Oregon in 2004. He is currently Research Associate at the Center for Teaching and Learning, where he serves as Principal Investigator of an Institute of Education Sciences grant evaluating the impact of a read-aloud curriculum on students' vocabulary acquisition and oral retell skills. His research interests include using formative assessments to guide instructional decision making and empirically validating interventions aimed at preventing or ameliorating student academic problems.

John R. Seeley, PhD, is Research Scientist at the Oregon Research Institute and Abacus Research, LLC. His areas of interest include early intervention, serious emotional-behavioral problems, screening methodology, and advanced statistical modeling of longitudinal and multilevel data.

Edward J. Kame'enui, PhD, is Knight Professor of Education and Director of the Institute for the Development of Educational Achievement and the Center on Teaching and Learning in the College of Education at the University of Oregon. His areas of interest include the prevention of reading failure, school-wide implementation of beginning reading instruction, and the design and delivery of effective teaching and assessment strategies and systems.

Carrie Thomas Beck, PhD, is Research Associate at the University of Oregon. Her research and teaching interests are in the areas of early literacy, vocabulary instruction, and instructional design.
