Studying large-scale programmes to improve patient safety in whole care systems: Challenges for research

Jonathan Benn*, Susan Burnett, Anam Parand, Anna Pinto, Sandra Iskander, Charles Vincent
Department of Biosurgery and Surgical Technology, Imperial College London, St Mary's Campus, QEQM Building, Praed Street, London W2 1NY, UK

Social Science & Medicine 69 (2009) 1767–1776. Available online 23 October 2009. doi:10.1016/j.socscimed.2009.09.051
* Corresponding author. Tel.: +44 (0)20 759 43487. E-mail address: [email protected] (J. Benn).

Keywords: Patient safety; Quality improvement programmes; Health care organisations; Methods; Safer Patients Initiative; UK

Abstract

Large-scale national and multi-institutional patient safety improvement programmes are being developed in the health care systems of several countries to address problems in the reliability of care delivered to patients. Drawing upon popular collaborative improvement models, these campaigns are ambitious in their aims to improve patient safety in macro-level systems such as whole health care organisations. This article considers the methodological issues involved in conducting research and evaluation of these programmes. Several specific research challenges are outlined, which result from the complexity of longitudinal, multi-level intervention programmes and the variable, highly sociotechnical care systems with which they interact. Organisational-level improvement programmes are often underspecified due to local variations in context and organisational readiness for improvement work. The result is variable implementation patterns and local adaptations. Programme effects span levels and other boundaries within a system, vary dynamically or are cumulative over time, and are problematic to understand in terms of cause and effect, where concurrent external influences exist and the impact upon study endpoints may be mediated by a range of organisational and social factors.
We outline the methodological approach to research in the United Kingdom Safer Patients Initiative, to exemplify how some of the challenges for research in this area can be met through a multi-method, longitudinal research design. Specifically, effective research designs must be sensitive to complex variation through employing multiple qualitative and quantitative measures, must collect data over time to understand change, and must utilise descriptive techniques to capture specific interactions between programme and context for implementation. When considering the long-term, sustained impact of an improvement programme, researchers must consider how to define and measure the capability for continuous safe and reliable care as a property of the whole care system. This requires a sociotechnical approach, rather than focusing upon one microsystem, disciplinary perspective or single level of the system.

© 2009 Elsevier Ltd. All rights reserved.

Introduction

In the UK and other countries, large-scale national and multi-institutional patient safety improvement programmes are beginning to emerge, to address the problems of patient safety and reliability in care that have been highlighted by a series of influential reports (e.g. Dept. Health, 2000; Kohn, Corrigan, & Donaldson, 2000). Such safety improvement programmes and campaigns are ambitious in their aims and represent intervention on a scale not seen before in health care. Current national campaigns are being designed on the basis that it is only through organisational-level development that any gains made in patient safety will be sustained and may be replicated throughout the system.

As our understanding of the origins and causes of failures grows, practical knowledge concerning how to rectify the problems and improve systems lags somewhere behind. Until recently, the majority of improvement initiatives may be considered to have been focused at the microsystems level within a health care organisation.
There is now growing recognition that patient safety and the capacity of an organisation to deliver consistent, high-quality and failure-free care is both a systemic issue and one that needs to be addressed at the level of the whole organisation or care system. If we are to understand how large-scale programmes can become effective in meeting these aims, we need research designs that are sensitive to the complexity involved in intervening to change whole systems. In this article, we discuss the challenges for research into large-scale patient safety improvement programmes, drawing

upon our experience of developing research into the Safer Patients Initiative, a large-scale improvement programme in the United Kingdom.

Research into organisational-level improvement programmes

Several authors have drawn attention to the limited evidence base for the efficacy of large-scale improvement programmes in health care (e.g. Mittman, 2004; Shojania & Grimshaw, 2005). The majority of the available research relates to the popular breakthrough collaborative programme model (Institute for Healthcare Improvement, 2004), which involves teams from multiple institutions working together to focus upon improvement in a specific clinical area (e.g. Bate, Robert, & McLeod, 2002; Kilo, 1998). Rather than focusing upon specific microsystems, recent campaigns in the patient safety arena have begun to focus upon whole organisations, including strategy and leadership for safety improvement, to achieve macro- or systems-level results (Nolan, 2007). Well-publicised examples include the U.S. 100,000 Lives Campaign and the UK Safer Patients Initiative. Many of the findings and issues raised by existing research into collaboratives are applicable to these campaigns, regardless of the level of focus. Scaling up to whole organisational systems, however, does raise its own issues, which will be discussed shortly.

Ovretveit and Gustafson (2002) comment that little research evidence exists regarding the effectiveness of quality improvement programmes that target whole organisations or health care systems, partly due to their nature as complex, changing, large-scale social interventions. Similarly, the authors of a recent systematic review in this area concluded that evidence for the effectiveness of quality improvement collaboratives was positive but limited (Schouten, Hulscher, Everdingen, Huijsman, & Grol, 2008). Despite these limitations, research highlights several commonly reported factors important for success. These include: senior management and board commitment, fostering receptivity to change, engaging clinicians in quality improvement, implementing quality reporting processes, developing safety culture and fostering staff-driven process improvement that engages the frontline (Gollop, Whitby, Buchanan, & Ketley, 2004; Vaughn et al., 2006; Walley, Rayment, & Cooke, 2006).

The current research methodologies that have been brought to bear upon organisation-wide programmes vary considerably and represent the contribution of several disciplines, including the social sciences. In addition to prospective controlled studies and pseudo-experimental designs, many examples of qualitative and mixed-methods designs may be found (Ayers, 2005; Bate et al., 2002; Bradley et al., 2001). In Schouten et al.'s (2008) review, only nine scientifically rigorous controlled studies with reported outcomes were identified. Outcome measures varied depending on the focus of the improvement programme, with limited focus upon process measures and the vast majority of studies utilising uncontrolled designs. The authors concluded that limitations in the evidence base for improvement programmes were due in part to heterogeneity in improvement programme designs and the research methods used to evaluate them.

The challenges for research design

Large-scale improvement programmes provide a number of challenges for research design, due to the inherent complexity in attempting to achieve effects in large-scale adaptive sociotechnical systems, such as a hospital site or whole health care organisation. In the discussion that follows, we first consider the research challenges that must be overcome to improve our understanding of these programmes, before describing a specific example of applied research in this area, to exemplify the challenges and how they might be resolved in practice through the individual components of a mixed-methods design.

Complexity holds considerable implications for care delivery systems (Plsek & Greenhalgh, 2001) and arises as a result of both the challenging nature of the care delivery task and the way in which care delivery systems are organised to meet these challenges; the difference between so-called intrinsic and induced complexity (Sinclair, 2007). We are specifically concerned here with complexity in the act of intervening to change systems. Complexity in this case is a direct result of four properties of systems-level improvement programmes (Fig. 1). The result is several specific challenges for understanding processes at this level, which we will describe in the following sections.

Scale of systems targeted: Improvement programmes at this level influence large-scale sociotechnical systems with a high degree of inherent complexity (e.g. whole hospital or organisational systems). These systems comprise multiple, semi-autonomous subunits connected by complex work and information flows to facilitate coordinated action in the delivery of coherent care pathways.

Diversity of subsystems affected: A large range of different types of sociotechnical subsystems are impacted by organisational-level programmes: e.g. clinical work systems, managerial processes, strategy and policy, reporting and information systems, and cultural value systems. This complexity means that any organisational system will be unique in certain parameters dependent upon local conditions.

Complexity of programme design: Systems-level improvement programmes require diverse elements targeting different systems and implemented at various levels of the host organisation. Programme elements for transfer to the target organisation may be highly specific (e.g. protocols for clinical processes) or more generic (e.g. quality improvement methodology and tools).

Longitudinal timescale: Improvement programmes at this level take place in time and therefore have cumulative developmental effects. This provides opportunity for a range of dynamic interactions between the programme and participating organisation, as well as any number of external influences. Organisations are variable in terms of structure, resource availability and personnel over a longitudinal timeframe.

Fig. 1. Antecedents of complexity in large-scale organisational-level intervention programmes.

Multiple starting conditions for participating organisations

If research is to measure organisational change as a result of an improvement programme, the starting point or pre-implementation conditions must first be defined. Health care organisations and care delivery systems vary in terms of several key dimensions: size, regional location, internal structure, management processes, history, external regulatory environment, culture and leadership, all of which may exert some influence upon an organisation's readiness and capacity for improvement. The necessary preconditions for successful organisational or systems-level change are diverse and have been the subject of considerable attention in research, reviews and theory (Garside, 1998; Gustafson et al., 2003; Pettigrew, 1990; Stetler, McQueen, Demakis, & Mittman, 2008; Weiner, Amick, & Lee, 2008). Research into the capacity for clinical systems improvement in a UK NHS context, for example, has shown that considerable variation exists between top and lower-performing organisations, with high variability in terms of service improvement practices, systems and structure for improvement and use of quality improvement tools (Walley et al., 2006).

Research must seek to understand initial variability between organisations in order to understand the course of development and response of the organisation to an improvement programme. Organisations that are low in terms of maturity in a target area, such as patient safety, may gain far more from the same programme than organisations that are more advanced at programme onset. One common programme design or change model may be more or less suitable for different individual sites with different internal and contextual conditions. The challenge for research is how to assess initial capability in an area such as patient safety improvement and then measure progress. The important implications are that research must take a longitudinal or developmental perspective, in which data is captured over time, and must attempt to describe local context and variation in specific case studies (e.g. Bate, Mendel, & Robert, 2008).

Underspecification and non-standardisation of intervention models

Underspecification of a programme or its components may lead to variability in the way it is implemented and a consequent loss of ability to attribute changes in outcome to a specific, unified intervention model. Underspecification results in a lack of standardisation and occurs for a variety of reasons, including insufficient description of system changes and how they will be implemented. Variation in local context for implementation can result in local adaptation to ensure that programme elements integrate with local work systems and the broader organisational environment. Elements of the programme itself may foster specific unique solutions to quality issues, such as autonomous frontline improvement cycles. On a systems level, variable implementations of programmes between organisations can occur due to difficulties in defining boundaries for the systems affected by the programme and in clearly delineating the onset and cessation of a discrete intervention effort.

From a research viewpoint, lack of standardisation of the intervention across different sites limits the ability to pool data, where sample populations, spread patterns and definition of measures for data collection may vary. This additionally hampers the ability to compare sites for research purposes and limits the generalisability of any findings beyond their original context. From a practical point of view, inability to define a model for intervention limits the repeatability or reproducibility of an intervention across programmes and contexts, and ultimately limits our ability to learn from repeated cycles of implementation and evaluation.

The level of effect: micro and macro systems

Organisations are complex systems with multiple embedded sub-levels (Sinclair, 2007). A considerable problem for research is at what level to focus measurement. Complex quality improvement programmes often comprise multiple microsystems-level changes, as well as interventions designed to impact at a higher, more pervasive level within an organisation. A growing body of literature in the health care quality domain focuses upon the clinical microsystem as a meaningful level at which to target systems design efforts and understand care delivery performance (Barach & Johnson, 2006; Batalden, Nelson, & Godfrey, 2007; Mohr & Batalden, 2002). This approach, however, does not necessarily address the macro-level context needed to support individual microsystem changes.

An assumption implicit in system-wide improvement programmes is that the aggregated effect of multiple micro-level interventions will be the achievement of "breakthrough" improvement in macro-level properties, such as overall system reliability, quality and safety of care and organisational culture. The issue for the researcher is how to understand the cumulative effects of intervention across multiple microsystems. Variable effects across individual sub-units may limit visible impact at the organisational level, despite the possible presence of some high-performing microsystems that may be masked by more mediocre performance in the majority.
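This masking effect can be made concrete with a small numerical sketch. The figures below are invented for illustration and are not SPI data: a marked gain confined to a minority of microsystems is nearly invisible in the organisation-level average.

```python
# Hypothetical illustration (figures invented, not SPI data): an
# organisation-level average can mask strong improvement confined to
# a minority of microsystems.

# Compliance with a care process (%) across ten hospital units
baseline = [60] * 10

# After the programme: three pilot units improve markedly,
# while the remaining seven are unchanged
post = [95, 95, 95] + [60] * 7

org_baseline = sum(baseline) / len(baseline)   # 60.0
org_post = sum(post) / len(post)               # 70.5

# A 35-point gain in the pilot units surfaces as only a 10.5-point
# organisational shift, indistinguishable from modest uniform change.
print(f"Organisation-level change: {org_post - org_baseline:.1f} points")
```

The same arithmetic explains why unit-level measurement must accompany any aggregate endpoint.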

Organic diffusion of change and spread patterns

Given the focus upon development in clinical microsystems, improvement methodology must include some deliberate phased process of spreading changes throughout the system. Incremental roll-out of changes, however, is often not regulated at the programme level, other than by general targets for spread set by programme leaders. Rather, the decision when to spread which elements and to which populations is made at local level by autonomous work stream leads. Spread patterns do not always follow organisational sub-divisions or unit boundaries, especially where a particular change may be piloted in multiple units simultaneously. Consequently, at any one research time point during the programme, the pattern of spread at one site is likely to be unique and different organisations will be non-comparable when the same conventional sub-units are compared.

Unsystematic and unregulated spread patterns across sites pose a clear problem for research sampling, as including a whole unit or service area is likely to include individuals and units that are naïve to the programme. The proportion of a sample that is naïve will diminish as the programme progresses, but if the object is to measure the effects of the intervention then including naïve cases will contaminate the sample. Hence, measures taken on a macro level will tap variation that is only partly attributable to the operation of the improvement programme, making effective sampling a complicated process, and necessitating collecting data related to the degree of exposure to the programme, along with any performance measures.
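The attenuation produced by programme-naïve cases can be sketched with invented numbers: the observed pooled effect scales with the fraction of the sample actually exposed, which is why exposure data must be collected alongside any performance measure.

```python
# Hypothetical sketch (invented numbers): pooling programme-naive
# respondents with exposed respondents attenuates the measured effect
# in proportion to the fraction of the sample actually exposed.

baseline_mean = 50.0   # assumed pre-programme survey score
true_effect = 8.0      # assumed mean improvement among exposed staff

def pooled_mean(exposed_fraction):
    """Mean score of a pooled sample in which only a fraction was exposed."""
    return baseline_mean + exposed_fraction * true_effect

# Early in the spread process few sampled units are exposed, so the
# observed organisation-level effect understates the true effect.
for frac in (0.2, 0.5, 0.9):
    observed = pooled_mean(frac) - baseline_mean
    print(f"exposed fraction {frac:.0%}: observed effect {observed:.1f} points")
```

Recording each respondent's degree of exposure would allow the diluted aggregate to be decomposed rather than misread as a weak programme effect.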

Understanding concepts and endpoints for measurement

The issue of what endpoints should be measured by research into organisational improvement programmes is a complicated one. This is perspective-dependent to a certain degree, as the results of programmes on this scale are relevant to a range of professional groups and internal and external stakeholders (Ovretveit, 2002). Authors have commented that the majority of evaluation studies of quality improvement collaboratives in health care have mainly focused upon particular changes in a few measures and short-term improvements in care processes or outcomes (Solberg, 2005). Ultimately, all care quality or safety improvement work must, by definition, impact positively upon the care that is delivered to the patient, as indicated by desirable changes in some aggregated measure of patient outcome within the specific clinical areas that form the focus of the improvement work. Measures relating to patient outcomes in this area, however, often require some interpretation (Vincent et al., 2008).

The impact of organisational interventions upon clinical outcomes appears to be mediated by a range of human, sociocultural and organisational factors. Research into the drivers of organisational performance in health care and non-health care sectors has highlighted organisational structure and processes as important (West, 2001). Reviews and theory concerning evaluation of the effects of interventions to improve care quality and patient safety outcomes have identified a range of upstream factors, such as institutional policies, processes, management and workforce variables (Brown & Lilford, 2008; Lilford, Mohammed, Spiegelhalter, & Thomson, 2004). Consequently, research will need to sample multiple endpoint measures, both outcomes and intermediary factors, in order to understand causality relating to organisational quality and safety interventions (Brown et al., 2008b; Brown & Lilford, 2008).
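The mediation argument can be sketched numerically. The toy model below uses invented coefficients, not estimates from the cited studies: the intervention affects the outcome only through an intermediary factor, so measuring that intermediary alongside the outcome is what makes the causal pathway visible.

```python
# Toy mediation model (invented coefficients, not estimates from the
# cited studies): the intervention X affects the outcome Y only
# through a mediator M, such as an organisational or cultural factor.
import random

random.seed(1)

def simulate(n=2000):
    """Simulate (exposure, mediator, outcome) triples."""
    rows = []
    for _ in range(n):
        x = random.choice([0, 1])            # intervention exposure
        m = 0.8 * x + random.gauss(0, 0.5)   # mediator responds to X
        y = 1.5 * m + random.gauss(0, 0.5)   # outcome responds only via M
        rows.append((x, m, y))
    return rows

rows = simulate()

def mean_outcome(exposure):
    vals = [y for x, _, y in rows if x == exposure]
    return sum(vals) / len(vals)

# The total effect of X on Y is transmitted entirely via M
# (about 0.8 * 1.5 = 1.2 in this model); sampling only the outcome
# would leave the pathway producing this effect unobserved.
total_effect = mean_outcome(1) - mean_outcome(0)
print(f"Estimated total effect: {total_effect:.2f}")
```

In a study design this corresponds to collecting intermediary measures (e.g. climate surveys, process reliability) alongside clinical endpoints.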

Complex causal sequences between intervention and patient outcome in organisational-level quality improvement programmes challenge researchers on a number of levels, in addition to the identification of independent and dependent research variables. Simply defining the organisational property that is acquired or developed from a systems viewpoint, as work within an improvement programme progresses, is a complex task. So too is understanding cause and effect over time, especially where the effects of an improvement programme need to be understood not only in terms of short-term system changes, but long-term impact and sustainability of any gains made.

Where large-scale systems are the target of a sustained intervention effort, an important consideration in the selection of evaluative measures is the timescale over which improvement or development is expected to take place. If impact upon desirable outcome variables is mediated by more pervasive organisational system effects, this impact may lag in time, to the extent that it may only be fully observable a significant period of time after the end of the programme. This causes problems in terms of: (1) getting an accurate understanding of what the full effects of the programme are, especially over a short study time frame, and (2) attributing causality to the programme itself over longer study time-frames, where other concurrent influences may be present in the intervening periods between observations.

Considering the long-term effects of quality improvement programmes raises the question of what overall property of the system is affected at the organisation-wide level. A recent review of the UK health system has shown that organisations vary considerably in their capacity for clinical systems improvement (Walley et al., 2006). From a care systems perspective, the outcome of primary interest from systems-level improvement programmes must be the capability for continuous improvement within the organisational health care system. Here we refer to the capability to maintain and continuously renew systems for ensuring safe, reliable, high-quality care delivery (Benn et al., 2009). Capability at the care systems level will be defined in terms of clinical, strategic, workforce and organisational elements, and the ways in which they are combined into coherent quality improvement and safety management processes. This poses significant challenges in terms of how to measure such a complex construct, which will be as multifaceted as the sociotechnical systems in which it resides. The safety sciences domain is grappling with similar high-level properties in a range of industries, such as the concept of system resilience (Hollnagel, Woods, & Leveson, 2006) and the optimisation of proactive organisational safety management strategies (Amalberti, 2001).

Once any beneficial systems change has been described, research must deal with the concept of "sustainability", a key component of the rationale underlying the design of many large-scale improvement programmes in health care. In addition to longitudinal measurement or data collection to investigate the longevity of programme effects, researchers should aim to understand the mechanisms of "embedding" of programme methodology and successful local innovations within the more permanent fabric and structure of the organisation.

A practical example: research design for the UK Safer Patients Initiative

In order to illustrate how the research challenges outlined above influence applied research in this area, we turn to a practical example from the authors' current research based upon a large-scale safety improvement programme in the United Kingdom: the Safer Patients Initiative (2004–2008). This work sought to understand the 'journey to safety' in 24 hospital sites, or how whole health care organisations can make significant and sustainable improvements in the quality and safety of care delivered to patients across a range of clinical areas. The Safer Patients Initiative was a longitudinal intervention designed to impact at the level of a whole acute care organisation. In line with the systems issues outlined previously, it may be regarded as a complex multi-component improvement programme with sociotechnical effects across multiple levels of an organisation.

The research programme sought to answer a range of questions of practical value to a range of stakeholders, including programme sponsors, designers and participants. These centred upon the issues of: organisational readiness, the process of improvement, programme impact (and how this can be measured), spread and sustainability of any gains made. This work was designed to complement a parallel evaluative programme undertaken by the Universities of Leicester and Birmingham in the UK, which employed ethnographic observation and a prospective controlled design focusing upon data extracted from medical record review as the principal methods of enquiry.

SPI programme design: an organisational-level intervention

The Safer Patients Initiative (SPI) was a demonstration project developed by the UK Health Foundation in collaboration with the U.S. Institute for Healthcare Improvement, based upon the Breakthrough Series Collaborative model (Fig. 2) (Institute for Healthcare Improvement, 2004) and the U.S. 100,000 Lives Campaign (Wachter & Pronovost, 2006). The aim of the programme was to achieve sustainable improvements in the safety and reliability of care delivered to patients in participating organisations, as measured on a variety of clinical process and organisational performance metrics, including hospital mortality, infection rates and adverse events. Following an initial 2-year pilot programme initiated at four sites in 2004, the main programme began in 2006, focusing upon 20 participating acute care organisations across the UK.

The SPI programme involved application of continuous quality improvement techniques adapted from industrial and manufacturing domains (e.g. Carey, 2003; Grol, Baker, & Moss, 2004; Langley, Nolan, Nolan, Norman, & Provost, 1996). In focus, the programme involved implementation of multiple clinical practice changes and support through measurement, structured educational events, and a focus upon leadership and strategy for patient safety within participating organisations. Four programme work streams targeted specific clinical work areas, with a fifth addressing organisational leadership (Fig. 3). The programme was driven by a series of four collaborative learning sessions led by an expert faculty team. Participating sites collaborated and shared experience to promote inter-site learning and dissemination of emergent best practice in the improvement work. Support from expert programme leaders was provided through email, conference calls and site visits.

Fig. 2. Breakthrough Series Collaborative model from the U.S. Institute for Healthcare Improvement (reproduced by permission).

SPI research: methodological considerations

The research we developed for SPI was informed by consideration of the complexity inherent in intervention at the level of whole systems and was, therefore, designed to be sensitive to the methodological considerations that we have discussed previously. The result was a longitudinal, mixed-methods approach, employing multiple qualitative and quantitative measures. This type of design adopts a range of social sciences research methods that may complement experimental designs commonly employed to evaluate the effects of clinical interventions in the medical setting. Here we are concerned specifically with the function of the different research elements in a mixed-methods design, relative to the challenges of studying programmes of this type and the particular methodological issues encountered in the development and execution of the SPI research. In the following, we focus upon these methodological issues. The findings from the SPI research are reported elsewhere (Benn et al., 2009).

Fig. 3. Summary of SPI programme content, generic tools and methods. The Safer Patients Initiative (UK 2004–2008): specific programme elements and care bundles, by work area:

- Medicines management: medicines reconciliation; anti-coagulant trigger tool
- Critical care: ventilator bundle; central line bundle; glucose control
- Perioperative care: SBAR communication protocol; safety briefings; hair removal
- General ward care: focus upon hand hygiene; early warning system; multi-disciplinary ward rounds
- Organisational leadership: leadership walkarounds and support of frontline improvement; setting a safety agenda and monitoring progress

Generic tools/techniques: CQI/TQM philosophy with semi-autonomous local quality improvement teams; PDSA cycles and small tests of change; incremental spread methodology; process measurement and analysis using Statistical Process Control principles.

Programme methodology: collaborative learning sessions; online networking and data tools; monthly evaluation and feedback; working and learning in couplets with a partner site; expert support (email, site visits, conference calls).

The research design incorporated three complementary research streams based upon survey, qualitative and quantitative data sources (Fig. 4). In order to capture longitudinal changes over time, data collection was planned at three time points: (1) retrospectively, after the preliminary phase of the programme, to pilot and refine the data collection methods and measures; (2) in the main phase of the SPI, at a point immediately following initial work to establish SPI at each site; and (3) after the programme had finished, for comparison with time point 2 to detect change.

A survey instrument was constructed in order to assess perceptions of various dimensions associated with organisational readiness, impact on safety and quality performance, and the sustainability of any gains made through the programme. Various scale items designed to detect quantitative change over time in organisational safety climate and capability were developed in the final version of the survey, grouped according to five dimensions hypothesised to be sensitive to the SPI programme:

- staff awareness and commitment to safe practices;
- senior management support and leadership;
- monitoring, measurement and feedback;
- communication and teamwork for safety;
- learning from failure and improving patient safety.

Due to the organic and incremental nature of diffusion of the programme and associated process changes within local sites, as described previously, sampling for this type of programme is potentially problematic. We used a systematic or quota design where the sample was defined as the core local SPI improvement team at each of the 20 sites participating in the main phase of the SPI, representing those responsible for attending the programme events and tasked with either driving or supporting local improvement activities. In order to sample frontline experiences, all personnel involved in PDSA testing in each clinical work area were included. The sample sizes in each of the 20 trusts at the second time point varied widely between 21 and 100, indicating variability in progress with "spread", with the average sample size being 65.

Fig. 4. Research process and programme timeline: multiple concurrent work streams and time points for data collection.

The pilot version of the survey used at time point 1 of the study included 18 items designed to test the perceived impact of SPI upon multiple sub-dimensions of safety climate and capability (Fig. 5), related to potential mediating factors for programme impact upon targeted care quality outcomes. Comparison of results between pilot trusts illustrates the variability that exists in terms of starting conditions and developmental course for participating organisations (Benn et al., 2009). At the descriptive level, the data obtained during the pilot phase of SPI illustrate how multiple measures applied at multiple time points can be used to visualise development. Distinct profiles of safety climate and capability can be seen when comparing organisations in terms of their status before the programme, after the programme and, by comparison, their profile of improvement or movement during the time period of the programme.

Fig. 5. Profile of improvement compared across four Safer Patients Initiative sites, based upon retrospective ratings of safety climate and capability dimensions before and after the programme (reproduced from Benn et al., 2009, with permission from Blackwell Publishing). (Radar plots for Sites A–D compare pre-SPI and post-SPI ratings on 18 dimensions, E01–E18, spanning knowledge, training, management engagement, senior support, safety profile, resources, understanding, commitment, autonomy, compliance, monitoring, detection, response, learning, blame, evaluation, briefings and multiprofessional working.)

The survey data collection at each of the study time points was complemented by interviews with key programme staff to capture open-ended qualitative information relating to the research aims. By way of an example, one issue that we explored in depth from the qualitative information was that of organisational readiness for quality improvement programmes. The results illustrate the value of using this type of enquiry to understand complex social and organisational phenomena. Open coding of the raw interview transcripts yielded a range of relevant data fragments, which were compared and refined into higher-level conceptual categories using the constant comparative method and axial coding (Flick, 2006; Strauss & Corbin, 1998). Finally, theoretical models of key concepts for improvement programmes, grounded in participants' experiences, were constructed. An emerging model for organisational readiness is illustrated in Fig. 6.

Consideration of the specific research challenges for studying large-scale quality improvement programmes would suggest that qualitative methods are desirable due to their sensitivity to the local context. Improvement programmes may succeed or fail due to a range of technical and non-technical issues associated with local implementation factors, such as staff acceptance of process changes and the degree to which new working practices integrate with specific local systems of work. Qualitative enquiry and ethnographic methods can investigate such local variation and how it interacts with programme elements. Description of the resulting local intervention model is, therefore, possible, which is essential if successful local innovations are to be reproducible and transferable to other contexts and programmes. One approach which we used in our SPI research is to focus upon specific case studies of improvement work at the microsystems level, in order to capture process and developmental course over time.

The third data source utilised in the SPI study comprised quantitative data, collected longitudinally at each site against standard programme measures to monitor progress. Recent reviews of research have called for more attention to be paid to measuring process factors within improvement programmes (Schouten et al., 2008; Solberg, 2005). Process measurement involves longitudinal or time series measurement of process parameters that are causally linked to desirable outcomes, such as improved infection rates or adverse event rates. A large body of literature now exists on the application of Statistical Process Control principles to health care systems to support improvement work (eg. Benneyan, Lloyd, & Plsek, 2003; Carey, 2003; Thor et al., 2007). Fig. 7 provides examples of four run charts comprising time series data for process and outcome measures in the SPI programme. Run charts are used as a simple means of visually analysing trends that can be utilised by frontline improvement teams.
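Run chart analysis of this kind can be automated with simple rules. The sketch below applies one widely used signal, a run of six or more consecutive points on the same side of the median, commonly interpreted as evidence of non-random change. It is a hypothetical illustration: the compliance figures, measure name and six-point threshold are assumptions for demonstration, not drawn from the SPI dataset.

```python
from statistics import median

def median_shifts(values, min_run=6):
    """Return the run-chart centre line (median) and the start indices of
    any runs of >= min_run consecutive points on one side of the median.
    Points falling exactly on the median neither extend nor break a run,
    per the usual run-chart convention."""
    centre = median(values)
    signals = []
    run_side, run_len, run_start = 0, 0, 0
    for i, v in enumerate(values):
        if v == centre:
            continue  # on-median points are skipped
        side = 1 if v > centre else -1
        if side != run_side:
            run_side, run_len, run_start = side, 0, i
        run_len += 1
        if run_len == min_run:
            signals.append(run_start)
    return centre, signals

# Hypothetical monthly ventilator-bundle compliance (%) for one unit
data = [62, 58, 65, 60, 59, 63, 78, 82, 85, 80, 88, 84, 90, 86]
centre, signals = median_shifts(data)
# centre is 79.0; shifts are signalled for the runs starting at months 0 and 7
```

In a programme of this scale, a rule of this kind would be applied per measure and per clinical work area; the exact threshold should follow whatever run-chart guidance the improvement teams actually use.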

The implementation of data collection and reporting structures for microsystems-level clinical data on this scale presents certain resource and practical challenges for organisations where such processes have not existed before. Review of process metrics from SPI revealed that speed and ease of uptake, along with final capability for process measurement, was highly variable between sites. This was evident from a number of data quality limitations arising in the first phase of the programme, including: insufficient data points and lack of sufficient baseline periods, changing samples or sampling strategies mid-time series, and inadequate or missing annotation describing which changes were implemented and when, amongst others.
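Several of the data quality problems listed above can be screened for automatically before a series is analysed. The sketch below is hypothetical: the record format, field names and twelve-point baseline threshold are illustrative assumptions, not part of the SPI programme specification.

```python
def screen_series(points, min_points=12):
    """Screen a longitudinal process-measure series for common data
    quality problems prior to analysis: too short a baseline, missing
    values, and no annotation of when changes were implemented."""
    issues = []
    if len(points) < min_points:
        issues.append("too few data points for a stable baseline")
    if any(p.get("value") is None for p in points):
        issues.append("missing values mid-series")
    if not any(p.get("annotation") for p in points):
        issues.append("no annotation of when changes were implemented")
    return issues

# Hypothetical six-month series with no annotated change points
series = [{"month": m, "value": 70.0 + m, "annotation": None}
          for m in range(1, 7)]
problems = screen_series(series)
# flags two issues: too few points, and no change annotations
```

A checklist of this kind cannot detect mid-series changes of sampling strategy, which still require local knowledge of how each measure was collected.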

Monitoring the impact of an intervention, or series of successive interventions, upon discrete clinical microsystems over time is appealing from a research design viewpoint. Local variability between sites, and even between sub-units within sites, means that aggregation of data across units may be inappropriate. The fact that programmes take place within a long-term time frame means that aggregating data over time will additionally mask more subtle variation and patterns of cause and effect. The SPI programme employed around 40 standard measures, depending upon the extent and range of improvement activities within a single site, thus providing a micro-level perspective upon systems change that was sensitive to variable local implementation and spread patterns. Process measurement, therefore, provides a potentially valuable source of data for research to understand the effects of improvement programmes within specific clinical microsystems.

Fig. 6. Qualitative analysis of organisational preconditions and programme readiness factors for the Safer Patients Initiative, with example quotations from interviews with programme team leaders.

Drawing upon our research experience with the SPI programme, a key implication of complexity in large-scale improvement programmes is the degree to which an intervention programme model at this level may be practically specified and accurately reproduced in local implementation within participating organisations. Local adaptation may arise due to incompatibilities between the programme and a specific local context, for example arising from importing a U.S. programme model into the UK health system context, making some degree of interpretation necessary. Where a campaign or programme spans large geographic or even national boundaries, differing local external regulation and political environments will influence internal processes. In the SPI, the organisations selected were from the four nations of the United Kingdom and, therefore, considerable differences in organisational regulatory environment, links with external quality agencies and history of national initiatives and improvement work were present.

Fig. 7. Example run charts showing time series data for Average Length of Stay in Intensive Care, along with other process and outcome parameters from other clinical areas.

In our experience of research in SPI, we encountered local variation and adaptations in a number of elements: (a) the way process measures were defined and data was collected at each site, (b) the rate and pattern of spread at each site, (c) the approach taken to programme governance and reporting at each site, (d) the positioning and prioritisation of SPI aims and activities in comparison with other strategic and programme objectives, and (e) the exact profile of change elements and work practice modifications implemented by each site. Given such variability, although the basic SPI programme was common across sites, its local instantiation was heterogeneous to a degree and the possibility, therefore, exists that any observed programme outcome is a product of the efficacy of a specific local variation of the programme, rather than the initial programme model itself. Furthermore, with local adaptation and integration of a programme into the structure and processes of an organisation, it becomes difficult to separate out the onset and cessation of a specific intervening effort from more general continuous organisational development activities, as Schouten et al. (2008) have attested.

Research limitations

We have described how one possible research design may meet the challenges of complexity in a large-scale intervention, but as with all methodological decisions under resource constraints, the reality is that in selecting one approach over another, the researcher makes various trade-offs. The relative virtues of qualitative and quantitative approaches and their combination in mixed-methods designs have been discussed elsewhere (Bryman & Teevan, 2001; Pope & Mays, 1995, 2000). So too has the suitability of various pseudo-experimental and evaluative designs for large-scale improvement programmes (Brown et al., 2008a; Ovretveit, 1998, 2002). From the perspective of the approach used in our SPI research, sensitivity to real-world complexity and contextual variation comes at a price, and it is to some of these drawbacks that we now turn.

An important question for mixed-methods designs is how to synthesise the findings across different data sources and analyses to build a coherent body of knowledge on a subject (Mays, Roberts, & Popay, 2001). This can be problematic where data sources are of different types or provide different levels of evidence. The researcher must decide what weight to assign to different components of the work and whether to use qualitative research synthesis, meta-analysis or another method. In addition to the burden placed upon the researcher to make sense of findings across multiple sub-studies, the methods used in mixed-methods designs, which might include ethnography, interviews, large-sample surveys and qualitative analysis, can be very resource intensive compared with statistical analysis of routinely generated clinical data.

Finally, the objective of mixed-methods designs in the application described in this article is to provide insight into complexity, context and processes that are causally remote from more tangible outcome measures. Here, rigour and confidence in results are achieved through the combination of perspectives and data sources, rather than any pseudo-experimental control of extraneous variation. If the use of prospective controlled designs is valid in the evaluation of systems-level interventions, then there is a clear trade-off between scientific rigour in design and the ability to describe complexity, causal mechanisms and dynamic processes that evolve over time. When studying complex adaptive sociotechnical systems, such as health care organisations, which may limit researchers' ability to match control cases or control extraneous variation, a combination of research methods drawn from both clinical and social science research traditions is likely to be most effective. To the extent that an improvement programme, either as designed or as implemented, is unique in its complexity, the generalisability of research findings to other instances will be limited and qualitative or descriptive work will be an important component of the study.

Implications and conclusions

We have discussed a number of methodological issues pertinent to the design of research into large-scale improvement programmes and have considered these in the context of a practical example in the UK. The characteristics, inherent complexity and multi-level nature of these programmes as systems for change pose some unique challenges for researchers. This is particularly the case where programme effects span levels within a system; vary dynamically or are cumulative over time; where identifying appropriate sample populations to study is problematic; or where variability between organisations in terms of start conditions, situational context or concurrent programmes and other external inputs acts as a confounding factor in understanding the impact of a programme.

Variation in the local context for implementation of an improvement programme gives rise to variation in the course that development may take at different levels of the system: this is as true when comparing microsystems within the same site as when comparing different sites. Research must be sensitive to complex variation, through employing multiple measures, collecting data longitudinally and utilising descriptive qualitative techniques to capture specific interactions between programme and context over time. From a practical viewpoint, the resulting knowledge will support designers in developing programmes to fit specific contexts or in selecting organisations with the preconditions to meet specific programme requirements.

When considering the long-term, sustained impact of an improvement programme, researchers must consider how to define and measure the capability for continuous safe and reliable care as a property of the whole care system. This requires a sociotechnical perspective, rather than focusing upon one microsystem, disciplinary perspective or single level of the system. This perspective necessitates understanding of how multiple simultaneous changes at both the clinical microsystem and organisational levels translate into overall effects on systems-level performance, such as shifts in culture and improvement in hospital-level patient safety indicators and care quality metrics. Establishing clear cause and effect relationships is problematic where the causal chain between intervention and outcome may take multiple routes, over different timescales, and be mediated by a range of technical, sociocultural and organisational factors. Even where clear cause and effect relationships between programme design characteristics and impact upon care delivery systems can be established, the fact that change programmes of this type are often underspecified and vary dynamically over time means that they are not easily repeatable, potentially hampering effective replication of implementations that have proven successful in the past.

Research designs that are sensitive to the issues raised in this article are likely to be multi-modal in design, longitudinal in scope, use a range of measures and require effective synthesis processes. Our experiences conducting research into the Safer Patients Initiative programme demonstrate how a mixed-methods, longitudinal perspective can resolve several issues arising from the complexity involved in intervening at the level of whole care systems. Selection of appropriate research methods to study this type of programme is likely to depend upon the researcher's perspective and purpose in conducting the research, as well as the type and scale of programme under consideration.

In this paper, we have aimed to describe universal challenges that any definitive research design must be able to resolve, and in so doing provide a basis for further methodological development in this area. Accommodating these issues may be problematic, but the goal of understanding how whole health care organisations can acquire the capability to deliver consistent, high-quality and failure-free care to patients remains paramount.

References

Amalberti, R. (2001). The paradoxes of almost totally safe transportation systems. Safety Science, 37(2–3), 109–126.

Ayers, L. (2005). Quality improvement learning collaboratives. Quality Management in Health Care, 14(4), 234.

Barach, P., & Johnson, J. K. (2006). Understanding the complexity of redesigning care around the clinical microsystem. Quality and Safety in Health Care, 15(Suppl. 1), i10–i16.

Batalden, P., Nelson, E., & Godfrey, M. (2007). Quality by design: A clinical microsystems approach. Jossey-Bass.

Bate, P., Mendel, P., & Robert, G. (2008). Organizing for quality: The improvement journeys of leading hospitals in Europe and the United States. Radcliffe Publishing.

Bate, P., Robert, G., & McLeod, H. (2002). Report on the breakthrough collaborative approach to quality and service improvement within four regions of the NHS: A research based investigation of the orthopaedic services collaborative within the Eastern, South and West, South East and Trent Regions. Health Services Management Centre: University of Birmingham.

Benn, J., Burnett, S., Parand, A., Pinto, A., Iskander, S., & Vincent, C. (2009). Perceptions of the impact of a large-scale collaborative improvement programme: experience in the UK Safer Patients Initiative. Journal of Evaluation in Clinical Practice, 15(3), 524–540.

Benneyan, J. C., Lloyd, R. C., & Plsek, P. E. (2003). Statistical process control as a tool for research and healthcare improvement. Quality and Safety in Health Care, 12(6), 458–464.

Bradley, E. H., Holmboe, E. S., Mattera, J. A., Roumanis, S. A., Radford, M. J., & Krumholz, H. M. (2001). A qualitative study of increasing beta-blocker use after myocardial infarction: why do some hospitals succeed? JAMA, 285(20), 2604–2611.

Brown, C., Hofer, T., Johal, A., Thomson, R., Nicholl, J., Franklin, B. D., et al. (2008a). An epistemology of patient safety research: a framework for study design and interpretation. Part 2. Study design. Quality and Safety in Health Care, 17(3), 163–169.

Brown, C., Hofer, T., Johal, A., Thomson, R., Nicholl, J., Franklin, B. D., et al. (2008b). An epistemology of patient safety research: a framework for study design and interpretation. Part 3. End points and measurement. Quality and Safety in Health Care, 17(3), 170–177.

Brown, C., & Lilford, R. (17 December 2008). Evaluating service delivery interventions to enhance patient safety. BMJ, 337, a2764.

Bryman, A., & Teevan, J. J. (2001). Social research methods. New York: Oxford University Press.

Carey, R. G. (2003). Improving healthcare with control charts: Basic and advanced SPC methods and case studies. Milwaukee, WI: ASQ Quality Press.

Dept. Health. (2000). An organisation with a memory. London: The Stationery Office.

Flick, U. (2006). An introduction to qualitative research (3rd ed.). London: Sage.

Garside, P. (1998). Organisational context for quality: lessons from the fields of organisational development and change management. Quality in Health Care, 7, S8–S15.

Gollop, R., Whitby, E., Buchanan, D., & Ketley, D. (2004). Influencing sceptical staff to become supporters of service improvement: a qualitative study of doctors' and managers' views. Quality and Safety in Health Care, 13(2), 108–114.

Grol, R., Baker, R., & Moss, F. (2004). Quality improvement research: Understanding the science of change in health care. London: BMJ Books.

Gustafson, D., Sainfort, F., Eichler, M., Adams, L., Bisognano, M., & Steudel, H. (2003). Developing and testing a model to predict outcomes of organizational change. Health Services Research, 38(2), 751–776.

Hollnagel, E., Woods, D. D., & Leveson, N. (2006). Resilience engineering: Concepts and precepts. Aldershot, UK: Ashgate.

Institute for Healthcare Improvement. (2004). The breakthrough series: IHI's collaborative model for achieving breakthrough improvement. Diabetes Spectrum, 17(2), 97–101.

Kilo, C. (1998). A framework for collaborative improvement: lessons from the Institute for Healthcare Improvement's Breakthrough Series. Quality Management in Health Care, 6(4), 1–13.

Kohn, L. T., Corrigan, J. M., & Donaldson, M. S. (2000). To err is human: Building a safer health system. Washington: National Academy Press.

Langley, G. J., Nolan, K. M., Nolan, T. W., Norman, C. L., & Provost, L. P. (1996). The improvement guide: A practical approach to enhancing organizational performance. San Francisco: Jossey-Bass Publishers.

Lilford, R., Mohammed, M. A., Spiegelhalter, D., & Thomson, R. (2004). Use and misuse of process and outcome data in managing performance of acute medical care: avoiding institutional stigma. The Lancet, 363(9415), 1147–1154.

Mays, N., Roberts, E., & Popay, J. (2001). Synthesising research evidence from studies of the delivery and organisation of health services. Studying the organisation and delivery of health services: Research methods. London: Routledge.

Mittman, B. S. (2004). Creating the evidence base for quality improvement collaboratives. Annals of Internal Medicine, 140(11), 897–901.

Mohr, J. J., & Batalden, P. B. (2002). Improving safety on the front lines: the role of clinical microsystems. Quality and Safety in Health Care, 11(1), 45–50.

Nolan, T. (2007). Execution of strategic improvement initiatives to produce system-level results. IHI Innovation Series. Cambridge, MA: Institute for Healthcare Improvement.

Ovretveit, J. (1998). Evaluating health interventions: An introduction to evaluation of health treatments, services, policies and organizational interventions. Maidenhead: Open University Press.

Ovretveit, J. (2002). Action evaluation of health programmes and changes: A handbook for a user-focused approach. Oxford: Radcliffe Medical Press.

Ovretveit, J., & Gustafson, D. (2002). Evaluation of quality improvement programmes. Quality and Safety in Health Care, 11(3), 270–275.

Pettigrew, A. M. (1990). Longitudinal field research on change: theory and practice. Organization Science, 1(3), 267–292.

Plsek, P. E., & Greenhalgh, T. (2001). Complexity science: the challenge of complexity in health care. BMJ, 323(7313), 625–628.

Pope, C., & Mays, N. (1995). Qualitative research: Reaching the parts other methods cannot reach: An introduction to qualitative methods in health and health services research. British Medical Association. pp. 42–45.

Pope, C., & Mays, N. (2000). Qualitative research in healthcare. London: British Medical Journal.

Schouten, L. M. T., Hulscher, M. E. J. L., Everdingen, J. J. E. V., Huijsman, R., & Grol, R. P. T. M. (2008). Evidence for the impact of quality improvement collaboratives: systematic review. BMJ, 336(7659), 1491–1494.

Shojania, K. G., & Grimshaw, J. M. (2005). Evidence-based quality improvement: the state of the science. Health Affairs, 24(1), 138–150.

Sinclair, M. A. (2007). Ergonomics issues in future systems. Ergonomics, 50(12), 1957–1986.

Solberg, L. I. (2005). If you've seen one quality improvement collaborative. Annals of Family Medicine, 3(3), 198–199.

Stetler, C., McQueen, L., Demakis, J., & Mittman, B. (2008). An organizational framework and strategic implementation for system-level change to enhance research-based practice: QUERI Series. Implementation Science, 3(1), 30.

Strauss, A. L., & Corbin, J. M. (1998). Basics of qualitative research: Techniques and procedures for developing grounded theory. Sage Publications Inc.

Thor, J., Lundberg, J., Ask, J., Olsson, J., Carli, C., Harenstam, K. P., et al. (2007). Application of statistical process control in healthcare improvement: systematic review. Quality and Safety in Health Care, 16(5), 387–399.

Vaughn, T., Koepke, M., Kroch, E., Lehrman, W., Sinha, S., & Levey, S. (2006). Engagement of leadership in quality improvement initiatives: Executive Quality Improvement Survey results. Journal of Patient Safety, 2(1), 2–9.

Vincent, C., Aylin, P., Franklin, B. D., Holmes, A., Iskander, S., Jacklin, A., et al. (13 November 2008). Is health care getting safer? BMJ, 337, a2426.

Wachter, R. M., & Pronovost, P. J. (2006). The 100,000 lives campaign: a scientific and policy review. Joint Commission Journal on Quality and Patient Safety, 32(11), 621–627.

Walley, P., Rayment, J., & Cooke, M. (2006). Clinical systems improvement in NHS hospital trusts and their PCTs: A snapshot of current practice. Institute for Innovation and Improvement & The University of Warwick.

Weiner, B. J., Amick, H., & Lee, S. Y. D. (2008). Review: conceptualization and measurement of organizational readiness for change. A review of the literature in health services research and other fields. Medical Care Research and Review, 65(4), 379–436.

West, E. (2001). Management matters: the link between hospital organisation and quality of patient care. Quality and Safety in Health Care, 10(1), 40–48.