Infection Control and Hospital Epidemiology, January 2014, Vol. 35, No. 1

original article

Quantifying Sources of Bias in National Healthcare Safety Network Laboratory-Identified Clostridium difficile Infection Rates

Valerie B. Haley, MS;1 A. Gregory DiRienzo, PhD;2 Emily C. Lutterloh, MD, MPH;1,2 Rachel L. Stricof, MPH, CIC3

(See the commentary by McGregor and Harris, on pages 8–9.)

objective. To assess the effect of multiple sources of bias on state- and hospital-specific National Healthcare Safety Network (NHSN) laboratory-identified Clostridium difficile infection (CDI) rates.

design. Sensitivity analysis.

setting. A total of 124 New York hospitals in 2010.

methods. New York NHSN CDI events from audited hospitals were matched to New York hospital discharge billing records to obtain additional information on patient age, length of stay, and previous hospital discharges. "Corrected" hospital-onset (HO) CDI rates were calculated after (1) correcting inaccurate case reporting found during audits, (2) incorporating knowledge of laboratory results from outside hospitals, (3) excluding days when patients were not at risk from the denominator of the rates, and (4) adjusting for patient age. Data sets were simulated with each of these sources of bias reintroduced individually and combined. The simulated rates were compared with the corrected rates. Performance (ie, better, worse, or average compared with the state average) was categorized, and misclassification compared with the corrected data set was measured.

results. Counting days patients were not at risk in the denominator reduced the state HO rate by 45% and resulted in 8% misclassification. Age adjustment and reporting errors also shifted rates (7% and 6% misclassification, respectively).

conclusions. Changing the NHSN protocol to require reporting of age-stratified patient-days and adjusting for patient-days at risk would improve comparability of rates across hospitals. Further research is needed to validate the risk-adjustment model before these data should be used as hospital performance measures.

Infect Control Hosp Epidemiol 2014;35(1):1-7

Affiliations: 1. Bureau of Healthcare-Associated Infections, New York State Department of Health, Albany, New York; 2. Department of Epidemiology and Biostatistics, State University of New York at Albany, New York; 3. Council of State and Territorial Epidemiologists, Atlanta, Georgia.

Received June 28, 2013; accepted September 5, 2013; electronically published November 26, 2013. © 2013 by The Society for Healthcare Epidemiology of America. All rights reserved. 0899-823X/2014/3501-0001$15.00. DOI: 10.1086/674389

Clostridium difficile causes gastrointestinal illness ranging from mild diarrhea to potentially fatal colitis.1 Between 2000 and 2009, the number of hospitalizations with C. difficile infection (CDI) more than doubled, from 49.2 to 109.6 per 100,000 US population.2 New York required hospitals to report CDI using the Centers for Disease Control and Prevention (CDC) National Healthcare Safety Network (NHSN) laboratory-identified (LabID) infection surveillance protocol3 in July 2009. The measure was adopted by the Centers for Medicare and Medicaid Services (CMS) Hospital Inpatient Quality Reporting Program in January 2013.4 Given the increased interest in and resources spent on monitoring CDI, it is important to understand the limitations of the measure when used to compare hospital performance.

In this article, we examine multiple sources of error that may impact reported hospital-onset (HO) CDI rates: (1) inaccurate case reporting, (2) exclusion of laboratory results from outside facilities, (3) inclusion of days when patients were not at risk for developing CDI in the denominator of the rates, and (4) not adjusting for patient age. By simulating the above scenarios, we show the extent to which these errors alter HO rates and the interpretation of hospital performance in New York.

methods

The NHSN CDI LabID protocol requires reporting of all positive laboratory results collected from unformed stool specimens; clinical evaluation is not used. Hospitals track CDI across all inpatient areas, excluding neonatal intensive care units and well-baby nurseries. Hospital staff enter the following CDI case report information into NHSN: hospital identifier, patient medical record number, patient name, date of birth, admission date, specimen date, and last discharge date from that same hospital. The total numbers of admissions and patient-days per month are also entered.


table 1. New York Clostridium difficile Infection Audit Results, 2010

                          Determined by auditor
Reported by hospital    Community onset   Hospital onset   Not case    Total
Community onset               1,619             20              21      1,660
Hospital onset                   13          1,580              15      1,608
Not case                         93             97               0        190
Total                         1,725          1,697              36      3,458

note. Auditors identified 5.5% more hospital-onset cases than originally reported by hospitals (1,697 compared with 1,608).

Cases for which specimens are obtained on the first, second, or third calendar day of a patient's hospital stay are designated community onset (CO), while cases for which specimens are obtained on day 4 or later are designated HO. In addition, all cases are classified as incident or recurrent on the basis of the date of the last specimen. If there is another specimen in NHSN for the same patient in the same hospital collected 2–8 weeks before the current specimen, the case is classified as recurrent. If the specimen is the first within the past 8 weeks, the case is classified as incident. Repeat specimens within 2 weeks are considered duplicates and are not reported.
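Because these classification rules are purely mechanical, a short sketch may help make them concrete. The following Python fragment is a minimal illustration under our own assumptions (hypothetical field names; it is not the NHSN implementation):

```python
from datetime import date, timedelta

def classify_cdi(admission: date, specimen: date,
                 prior_specimens: list[date]) -> str | None:
    """Classify a positive specimen per the LabID rules described above.

    prior_specimens: dates of earlier positive specimens for the same
    patient in the same hospital. Returns None for unreported duplicates.
    """
    onset_day = (specimen - admission).days + 1  # admission date is day 1
    onset = "CO" if onset_day <= 3 else "HO"     # day 4 or later is HO
    last = max(prior_specimens) if prior_specimens else None
    if last is None or specimen - last > timedelta(weeks=8):
        episode = "incident"        # first positive within the past 8 weeks
    elif specimen - last <= timedelta(weeks=2):
        return None                 # repeat within 2 weeks: duplicate
    else:
        episode = "recurrent"       # prior positive 2-8 weeks earlier
    return f"{episode} {onset}"

# Example: positive on hospital day 6, prior positive 3 weeks earlier.
print(classify_cdi(date(2010, 5, 1), date(2010, 5, 6), [date(2010, 4, 15)]))
# -> "recurrent HO"
```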

New York hospitals report administrative data on all hospital discharges through the Statewide Planning and Research Cooperative System (SPARCS).5 Patient-specific information on name (2 letters from the first name, 4 letters from the last name), date of birth, street/city/ZIP, Social Security number (last 4 digits), medical record number, and gender was used to create and clean a patient-specific identification code, allowing patient discharges to be tracked over time both within and between hospitals.

NHSN 2009–2011 cases were matched to SPARCS 2009–2011 discharges using a probabilistic scoring algorithm6,7 adapted for these data sets on the basis of medical record number, birth date, admission date, first name, last name, gender, and CDI diagnosis, using SAS, version 9.3. Each NHSN CDI report was linked to the highest-scoring SPARCS record. For the 4% of NHSN records with a mismatch on admission date, the patient was assigned the NHSN admission date and a discharge date 10 days after the specimen date, on the basis of the average length of stay (LOS) after the CDI test observed in the database. Admissions occurring in 2010 were selected for analysis. The HO rate was defined as the number of incident HO cases reported to NHSN per 10,000 patient-days reported to SPARCS. Patient-days per admission were calculated as the discharge date minus the admission date plus 1.
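As a worked version of the rate definition, here is a minimal sketch (the function names and example numbers are ours, not from the study):

```python
from datetime import date

def patient_days(admission: date, discharge: date) -> int:
    # Patient-days per admission = discharge date - admission date + 1.
    return (discharge - admission).days + 1

def ho_rate_per_10k(incident_ho_cases: int, total_patient_days: int) -> float:
    # HO rate = incident HO cases per 10,000 patient-days.
    return 10_000 * incident_ho_cases / total_patient_days

print(patient_days(date(2010, 3, 1), date(2010, 3, 8)))  # 8 patient-days
print(round(ho_rate_per_10k(120, 95_000), 1))            # 12.6
```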

All 176 New York acute care hospitals (excluding Veterans Affairs, critical access, psychiatric, and long-term acute care hospitals) reported CDI data in 2010. The matched data set was "corrected" to remove the sources of error under investigation as follows.

Correction for Errors That Were Identified by New York Staff during Audits

To ensure the validity of the data reported to NHSN, New York staff audit a sample of hospitals each year. Hospitals with unusually high or low healthcare-associated infection rates (not limited to CDI) in the previous year, hospitals that were not audited in the previous year, and larger hospitals were more likely to have been selected. During the audit, New York staff compared a laboratory-generated line list of CDI cases for a selected time period with the data entered into NHSN. Disagreements were reviewed with the hospital staff at the end of the visit, and all were subsequently corrected in NHSN. The corrected data set used in this analysis is limited to the 124 (70%) hospitals whose 2010 data were audited, to more clearly describe the impact of auditing.

Knowledge of Laboratory Results from Outside a Facility

CDI is classified as CO or HO within each hospital only on the basis of laboratory testing performed within that hospital, because hospitals do not have access to laboratory data from other facilities and because NHSN only allows data entry of events occurring within one's own reporting hospital. However, CDI patients may be admitted to a hospital with a recurrent infection that appears incident from the perspective of the admitting hospital. The frequency of this occurrence was estimated by following patient admissions across hospitals. In the corrected data, cases with specimens that were collected within 8 weeks of a previous test at a different hospital were not included in the incident HO count.
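A minimal sketch of that exclusion, assuming a statewide list of (patient_id, hospital, specimen_date) tuples assembled from the linked data (the field names are illustrative):

```python
from datetime import timedelta

EIGHT_WEEKS = timedelta(weeks=8)

def drop_cross_hospital_recurrences(cases):
    """Remove cases whose specimen falls within 8 weeks of an earlier
    positive test at a *different* hospital; keep the rest as incident.

    cases: iterable of (patient_id, hospital, specimen_date) tuples.
    """
    cases = list(cases)
    kept = []
    for pid, hosp, spec in cases:
        prior_elsewhere = any(
            p == pid and h != hosp and timedelta(0) < spec - s <= EIGHT_WEEKS
            for p, h, s in cases
        )
        if not prior_elsewhere:
            kept.append((pid, hosp, spec))
    return kept
```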

Correcting the Denominator to Count Only Days Patients Are at Risk for an HO Event

NHSN CDI rates are calculated using the total number of patient-days per month in the denominator. However, by definition patients cannot have an incident HO event recorded during the first 3 days of a hospital stay or in the 8 weeks following a previous positive specimen. The 8 weeks include days in the hospital immediately following the positive specimen and prior to discharge, as well as days during subsequent readmissions. In the corrected data set, these nonrisk days were not counted. In a subanalysis, the impacts of the first 3 days of hospitalization and the 8 weeks after the event were examined separately.
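To make the denominator correction concrete, here is a minimal sketch under our own assumptions (hypothetical inputs; positives may include specimens from prior admissions):

```python
from datetime import date, timedelta

def at_risk_days(admission: date, discharge: date,
                 positives: list[date]) -> int:
    """Count the days of a stay on which an incident HO event could be
    recorded: day 4 onward, excluding any day within 8 weeks after a
    previous positive specimen (including ones from earlier admissions)."""
    days = 0
    d = admission
    while d <= discharge:
        stay_day = (d - admission).days + 1
        blocked = any(p < d <= p + timedelta(weeks=8) for p in positives)
        if stay_day >= 4 and not blocked:
            days += 1
        d += timedelta(days=1)
    return days

# A 10-day stay with a positive specimen on day 5: only days 4 and 5 are
# at risk; days 6-10 fall in the 8-week posttest window.
print(at_risk_days(date(2010, 6, 1), date(2010, 6, 10), [date(2010, 6, 5)]))  # 2
```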


table 2. Hospital-Onset (HO) Clostridium difficile Incidence Rates in 124 New York State Hospitals in 2010 under Different Bias Scenarios

Scenario                                                  Rate(a)   Bias, %(b)   Better, no.(c)   Worse, no.(d)   Different, no. (%)
Corrected (audited, outside lab results, exact
  denominator, age adjustment)                             11.6       NA             NA               NA              NA
No auditing                                                11.5      -1.4            6                3               9 (7)
No outside lab results                                     11.8       1.3            2                2               4 (3)
No exclusion of patient-days not at risk                    6.4     -44.7            6                4              10 (8)
No age adjustment                                          11.6       0              4                4               8 (6)
All biases (no auditing, no outside lab results, no
  exclusion of days not at risk, no age adjustment)         6.4     -44.7            8                7              15 (12)
Corrected numerator and length-of-stay estimated
  denominator(e)                                           11.7       0.2            1                2               3 (2)

note. NA, not applicable. Rate = state average HO rate; Better/Worse = hospitals in a better/worse performance group; Different = total hospitals in a different performance group.
a. Incident cases per 10,000 patient-days, adjusted for test method and age except where noted.
b. Bias = 100 × (scenario rate - corrected rate) / corrected rate.
c. Hospitals were categorized into 3 performance groups ("better," "okay," or "worse" than the state average) for each scenario. Being in a better performance group means that the hospital was categorized as better according to this scenario and okay using the corrected data, or that the hospital was categorized as okay according to this scenario and worse using the corrected data.
d. The hospital was categorized as worse according to this scenario and okay using the corrected data, or the hospital was categorized as okay according to this scenario and better using the corrected data.
e. Estimated denominator = crude denominator × exp(A)/[1 + exp(A)], where A = -3.08 + 1.74 × ln(length of stay).

table 3. Model to Predict Hospital-Onset Clostridium difficile Infections Using Corrected New York 2010 Data

Risk factor                          Relative risk(a) (95% confidence interval)
Age
  0–9 years                          1.8 (1.3–2.5)
  10–49 years                        1.0 (reference)
  50–74 years                        2.1 (1.8–2.5)
  ≥75 years                          3.3 (2.7–3.9)
Laboratory test method
  Nucleic acid amplification test    1.5 (1.3–1.9)
  Other                              1.0 (reference)

a. Negative binomial model. Data set includes corrections found during audit and from knowledge of test results collected by other hospitals, and it only counts days patients were at risk for a recorded hospital-onset infection.


Improving Risk Adjustment Using Patient Age

On the assumption that it would be feasible for hospitals to stratify patient-days by age group using administrative data, we categorized the patients into age groups on the basis of the observed association between age and corrected HO rate and added age to the risk-adjustment model.

NHSN summarizes hospital performance using a standardized infection ratio (SIR). The SIR compares the hospital's data with the NHSN baseline (759 facilities that reported CDI data in 2010–2011) using a negative binomial regression model that adjusts for laboratory testing method, admission prevalence CDI rate, facility bed size, and medical school affiliation.8 Hospitals are designated as performing significantly better or worse than the NHSN baseline by comparing the 95% confidence interval around the SIR to 1. The analysis presented here is conceptually similar; however, the reference population is the 124 audited hospitals in New York State in 2010, the risk-adjustment variables are different, and the SIR is converted to a rate by multiplying the SIR by the New York average 2010 HO rate. Admission prevalence was not included in our model because the observed association between admission prevalence and new HO cases may be related to poor infection prevention practices that allow disease transmission, and is thus more reflective of the hospital performance we are trying to measure than of inherent patient risk for CDI. Similarly, facility bed size and medical school affiliation may reflect the quality of patient care. Our model predicted the number of HO cases in each hospital on the basis of age group and laboratory test methodology (nucleic acid amplification test vs other). Hospitals were categorized as performing significantly better, worse, or the same as the state average; these will be called "performance groups."
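The SIR mechanics reduce to an observed-over-expected ratio, with expected counts taken from the fitted model. A minimal sketch, assuming hospital-level arrays and using statsmodels (our choice of tooling; the paper's analysis used SAS):

```python
import numpy as np
import statsmodels.api as sm

def expected_cases(X, cases, days_at_risk):
    """Fit a negative binomial model of HO case counts with
    log(at-risk patient-days) as offset; the fitted values serve as
    each hospital's expected count.

    X: hospital-level covariates (age-group mix, NAAT indicator).
    """
    res = sm.GLM(cases, sm.add_constant(X),
                 family=sm.families.NegativeBinomial(),
                 offset=np.log(days_at_risk)).fit()
    return res.fittedvalues

def sir_to_rate(observed, expected, state_rate):
    # SIR = observed / expected; adjusted rate = SIR x state average HO rate.
    return (observed / expected) * state_rate
```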

Simulated data sets were created by reintroducing the sources of error into the corrected data set, one at a time and combined. Additionally, a sixth simulated data set was created to test a simple method of approximating days at risk: it would be unfeasible to collect and report the exact number of days at risk per patient for routine surveillance purposes, because this would require each hospital to link laboratory testing dates to admission/discharge data and calculate LOS by patient.


Figure 1. Comparison of hospital-onset Clostridium difficile rates under different bias scenarios. Each point represents 1 hospital, and the shape signifies the hospital performance (ie, hospital rate is better, worse, or okay compared with the state average) for the data on the vertical axis. The narrow bottom plot ("Ref") shows the hospital performance groups for the corrected rates. Rates are the number of incident hospital-onset cases per 10,000 patient-days, adjusted by age group and laboratory test method except as noted in the third plot. One hospital with an outlying corrected rate of 41 is not shown, to improve resolution. The diagonal line represents perfect correlation of the rate on the vertical axis with the rate on the horizontal axis.

Instead, regression was used to define an equation that predicts the proportion of total hospital patient-days that were truly at risk (ie, the exact number of at-risk days divided by total hospital patient-days) on the basis of average LOS (ie, hospital patient-days divided by admissions, which are already collected for NHSN). The approximate days at risk were calculated by multiplying the total hospital patient-days from NHSN by the predicted proportion of total days that were truly at risk. HO rates and performance groups for each of the above-described scenarios were calculated using the same method as described earlier, using the scenario-specific data as the reference population.

The overall bias for each scenario was expressed as the percent difference between the scenario state rate and the corrected state rate. The overall bias shows the degree to which different definitions affect the magnitude of the CDI rate. The hospital-specific bias was shown in 2 ways: (1) by comparing the scenario rates to the corrected rates using correlation plots and calculating the average absolute difference in rates and (2) by calculating the number and percentage of hospitals that were placed in different performance groups in the scenario data set compared with the corrected data set. Hospital-specific bias is important when assessing performance among subgroups: for example, comparing hospital rates to the state average, as in this example, or comparing hospital rates to the national baseline or state rates to the national baseline in other applications.
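A minimal sketch of both bias measures, assuming parallel per-hospital arrays of rates and performance-group labels for a scenario and for the corrected data (the inputs are hypothetical):

```python
import numpy as np

def overall_bias(scenario_state_rate: float, corrected_state_rate: float) -> float:
    # Percent difference: 100 * (scenario rate - corrected rate) / corrected rate.
    return 100 * (scenario_state_rate - corrected_state_rate) / corrected_state_rate

def hospital_bias(scenario_rates, corrected_rates,
                  scenario_groups, corrected_groups):
    """Average absolute rate shift, and percent of hospitals whose
    performance group ('better'/'okay'/'worse') changes versus the
    corrected data set."""
    shift = np.mean(np.abs(np.asarray(scenario_rates) -
                           np.asarray(corrected_rates)))
    moved = 100 * np.mean([s != c for s, c in
                           zip(scenario_groups, corrected_groups)])
    return shift, moved

# Rounded state rates from Table 2; the paper's -44.7 uses unrounded rates.
print(round(overall_bias(6.4, 11.6), 1))  # -44.8
```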

results

During the 2010 CDI audit, New York staff reviewed an average of 28 laboratory reports per hospital, for a total of 3,458 records (Table 1). Overall, the auditors agreed with the reported hospital categorization 93% of the time. Across hospitals, the agreement rate ranged from 45% to 100% (48% of hospitals had 100% agreement). Auditors identified a total of 1,697 HO cases, an increase of 5.5% over the 1,608 that were originally reported. The most common error was not entering a laboratory record into NHSN; this occurred because of inconsistent surveillance processes, as all records were entered manually each month. Errors in admission or specimen dates impacted categorization as CO versus HO, typographical errors in medical record numbers resulted in duplicate entries, and overreported cases were identified when documentation of appropriate laboratory results was not found or specimens were not obtained in an inpatient area.

The average statewide HO rates calculated according to each scenario and the overall bias compared with the corrected rate are given in Table 2.


figure 2. Association between average hospital length of stay (LOS) and proportion of total patient-days at risk for new hospital-onset infection. Each symbol represents 1 hospital. Average hospital LOS = patient-days/admissions; proportion of days at risk for hospital-onset infection = patient-days at risk/all patient-days. To correct total patient-days to days at risk using this model, multiply total patient-days by the predicted proportion (curve connecting points in graph). Predicted proportion = exp(A)/[1 + exp(A)], where A = -3.08 + 1.74 × ln(LOS).
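As a worked check of the fitted equation in the Figure 2 caption (the function name is ours):

```python
import math

def predicted_at_risk_fraction(avg_los: float) -> float:
    # Logistic model from Figure 2: A = -3.08 + 1.74 * ln(LOS);
    # predicted proportion of patient-days at risk = exp(A) / (1 + exp(A)).
    a = -3.08 + 1.74 * math.log(avg_los)
    return math.exp(a) / (1 + math.exp(a))

# Average LOS varied between about 4 and 10 days across hospitals:
print(round(predicted_at_risk_fraction(4.0), 2))   # ~0.34, barely a third at risk
print(round(predicted_at_risk_fraction(10.0), 2))  # ~0.72
```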

Counting all patient-days instead of only patient-days at risk reduced the state rate by 45% compared with the corrected data set. Age adjustment did not impact the overall state rate because the state was the reference population. Not auditing any of the data, compared with auditing approximately 25% of the data from 100% of the audited hospitals in the corrected data set, decreased the state rate by 1.4%. Not accounting for testing done at other facilities increased the rate by 1.3%. The remainder of this section highlights the extent to which the errors differentially affected some hospital rates more than others.

Risk-adjustment model results using the corrected data set are shown in Table 3. Patient age and laboratory test method were significantly associated with HO rates.

Figure 1 (top plot) shows the distribution of reporting accuracy across hospitals. If the hospitals had not been audited (and the data not subsequently corrected), 5% of hospitals would have been in better performance groups, and 2% would have been in worse performance groups.

Reporting total patient-days rather than days at risk also caused misclassification, resulting in 5% of hospitals moving to a better performance group and 3% to a worse performance group. Counting the first 3 days (which impacts all patients) had a larger impact than counting the 8-week posttest period (which affects only cases). Average LOS varied between 4 and 10 days among hospitals and was strongly correlated with the percentage of total reported days that were truly at risk (Figure 2). Counting the first 3 days impacted hospitals with short average LOS the most; less than 35% of total patient-days were truly at risk in the hospitals with the shortest LOS. The impact of counting the 8-week posttest period also varied systematically; the rates at hospitals with the highest rates decreased more than the rates at the hospitals with the lowest rates, because the more cases that are reported, the more patient-days will be excluded from the exact denominator.


figure 3. Relationship between hospital-onset Clostridium difficile incidence rate and patient age. Each point is the number of cases per 10,000 patient-days for patients at the given age, using the New York 2010 corrected data set. The points are connected by a smooth line showing the overall trend. Vertical lines show the 4 age groups used in this analysis: 0–9, 10–49, 50–74, and ≥75 years.

Correcting the denominator on the basis of the relationship shown in Figure 2 reduced the overall bias in the HO rates, and only 2% of hospitals were misclassified by performance group.

Figure 3 shows the association between age and HO rate. Patients were divided into 4 age groups (0–9, 10–49, 50–74, and ≥75 years) on the basis of the shape of this curve. Average patient age varied widely across New York hospitals, from a low of 15 years at a hospital specializing in the care of women and children to a high of 71 years at 2 community hospitals and a specialty cardiac center. Lack of age adjustment shifted hospital rates an average of 1.4 cases per 10,000 patient-days in either direction, with a maximum shift of 6.9 cases per 10,000 patient-days (Figure 1, third plot). This resulted in 3% of hospitals moving to a better performance group and 3% to a worse group.

Lack of information on CDI test results obtained outside the reporting hospital had minimal impact on HO rates. Hospital rates shifted, on average, 0.2 cases per 10,000 patient-days, and only 3% of hospitals were in different performance groups.

Simultaneously considering all 4 biases had the largest overall impact; hospital rates are both spread in a wide band and shifted off the diagonal (Figure 1, fourth plot). Six percent of hospitals were misclassified into better performance groups and 6% into worse groups.

discussion

Auditing has a small impact on interpreting HO performance when comparing audited hospitals to nonaudited hospitals. In our systematic sample of all events reported during a continuous time interval in each hospital, reporting errors resulted in a 5.5% underestimation of the HO rate. A limitation of these results is that the hospitals were not randomly selected for audit. The impact of reporting errors should decline over time as hospitals develop the capacity to directly upload laboratory data into NHSN.

Age is one of the most important risk factors for CDI. The elderly are at higher risk due to factors such as immune senescence, frequent exposure to antibiotics and other medications that affect the gastrointestinal tract, presence of underlying diseases, and greater potential for environmental exposure in nursing homes and through repeated hospitalizations.9 In our analysis, children under 10 years old also had an elevated incidence of CDI. This observation may have been accentuated by correcting LOS for days at risk, because this age group had the shortest LOS. Zilberberg et al10 also found that hospitalized nonnewborn children aged 9 years or younger had a higher incidence than did other children. Without adjusting for age, hospitals with an older patient base would be penalized in public reports of CDI rates compared with hospitals with more moderately aged patients. This problem could easily be addressed by requiring collection of age-stratified CDI data within NHSN and including age in the risk-adjustment model.

Differences in LOS between hospitals also impact accurate comparison of HO rates that are based on total (rather than at-risk) patient-days. The incidence density denominator should exclude patient-days when patients are not at risk for infection.11 HO rates at hospitals with shorter LOS are biased downward more than the rates at hospitals with longer LOS.


This problem could be addressed by modifying the denominator of rate calculations on the basis of the strong relationship between average LOS and the proportion of patient-days at risk, or by stratifying hospitals by LOS or by the type of patient areas often related to long LOS (eg, inpatient psychiatric areas, whose patients are at low risk for CDI).

Knowledge of CDI test results obtained by hospitals outside the reporting hospital had minimal impact on HO rates, although the impact of test results from outpatient locations was not assessed. Consideration of all 4 errors simultaneously, as currently reported in NHSN by unaudited hospitals, resulted in a moderate amount of misclassification (12%). As LOS, age, and data accuracy may vary regionally, these results may not be generalizable nationwide.

Given the large and increasing number of reported CDI cases, many stakeholders want to compare CDI rates between hospitals. New York public reports12 do not statistically compare CDI rates because of insufficient patient-level data to adjust for differences in patient risk for CDI and uncertainty in the timing of infection (ie, the incubation period may vary with patient risk factors; some HO patients may have been colonized prior to admission). Rather, hospitals are flagged when they show significant increases or decreases over time that are not attributable to changes in testing methods. Current NHSN CDI surveillance is useful within hospitals to evaluate trends and the impact of interventions aimed at reducing rates. Variation in audit coverage across the country results in inequitable comparison of hospital and state rates; a fair and efficient process that incorporates the needs of facilities, states, the CDC, and the CMS is needed. Modifications to NHSN to allow risk adjustment for age and correction for differences in LOS would improve comparability of rates between hospitals. However, further research is needed to validate the risk-adjustment model before these data should be used as hospital performance measures.

acknowledgments

We thank current and past New York healthcare-associated infection (HAI) program staff Peggy Hazamy, Marie Tsivitis, Boldtsetseg Tserenputsag, Carole VanAntwerpen, Kate Gase, Diane Doughty, KuangNan Xiong, and Victor Tucci for assisting New York hospitals in HAI reporting and validating the 2010 Clostridium difficile infection data.

Financial support. The New York State Department of Health received Epidemiology and Laboratory Capacity for Infectious Diseases (ELC) funding from the Centers for Disease Control and Prevention, which partially supported this work.

Potential conflicts of interest. All authors report no conflicts of interest relevant to this article. All authors submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest, and the conflicts that the editors consider relevant to this article are disclosed here.

Address correspondence to Valerie B. Haley, MS, Bureau of Healthcare-Associated Infections, New York State Department of Health, Corning Tower, Room 523, Albany, NY 12237 ([email protected]).

references

1. Johnson S, Gerding DN. Clostridium difficile. In: Mayhall CG, ed. Hospital Epidemiology and Infection Control. 3rd ed. Philadelphia: Lippincott Williams & Wilkins, 2005:623–634.

2. Lucado J, Gould C, Elixhauser A. Clostridium difficile infections (CDI) in hospital stays, 2009. Statistical brief 124. Healthcare Cost and Utilization Project (HCUP) Statistical Briefs. Rockville, MD: US Agency for Health Care Policy and Research, 2012.

3. National Healthcare Safety Network (NHSN) manual: patient safety component protocol. Centers for Disease Control and Prevention website. http://www.cdc.gov/nhsn/. Accessed April 23, 2013.

4. Centers for Medicare and Medicaid Services. Acute Inpatient Prospective Payment System. http://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/index.html. Accessed August 18, 2013.

5. Statewide Planning and Research Cooperative System (SPARCS) overview. New York State Department of Health website. http://www.health.ny.gov/statistics/sparcs/. Accessed February 18, 2013.

6. Whalen D, Pepitone A, Graver L, Busch JD. Linking client records from substance abuse, mental health and Medicaid state agencies. Technical monograph. Rockville, MD: US Department of Health and Human Services Substance Abuse and Mental Health Services Administration, 2001.

7. Campbell KM. Rule your data with the Link King (a SAS/AF application for record linkage and unduplication). In: Proceedings of SAS Users Group International. April 10–13, 2005, Philadelphia, PA. Paper 020-30.

8. Dudeck MA, Weiner LM, Malpiedi PJ, Edwards JR, Peterson KD, Sievert DM. Risk adjustment for healthcare facility-onset C. difficile and MRSA bacteremia laboratory-identified event reporting in NHSN. National Healthcare Safety Network website. http://www.cdc.gov/nhsn/PDFs/mrsa-cdi/RiskAdjustment-MRSA-CDI.pdf. Accessed April 23, 2013.

9. Simor AE. Diagnosis, management, and prevention of Clostridium difficile infection in long-term care facilities: a review. J Am Geriatr Soc 2010;58:1556–1564.

10. Zilberberg MD, Tillotson GS, McDonald LC. Clostridium difficile infections among hospitalized children, United States, 1997–2006. Emerg Infect Dis 2010;16:604–609.

11. Huang SS, Avery TR, Song Y, et al. Quantifying interhospital patient sharing as a mechanism for infectious disease spread. Infect Control Hosp Epidemiol 2010;31:1160–1169.

12. Hospital-acquired infections: New York State 2011. New York State Department of Health website. http://www.health.state.ny.us/statistics/facilities/hospital/hospital_acquired_infections/. Published 2012. Accessed February 18, 2012.
