
IRSPM VI Edinburgh - Submission Public Management Review

Progress and Regress

in Performance Measurement Systems

Prof. Dr. Geert Bouckaert University of Leuven, Belgium

[email protected]

Wouter van Dooren University of Leuven, Belgium

[email protected]

University of Leuven Public Management Institute E. Van Evenstraat 2A 3000 Leuven Belgium


Abstract

Performance measurement is not a new phenomenon. Though very popular in New Public

Management, its roots in the United States go back at least a century. In Europe too,

considerable experience with performance measurement may be found. However, the

performance measurement history is not a linear path of progress with increasingly better

measurement systems. Regress occurs as well. Until now, progress and regress have mainly been assessed on intuitive grounds. In this article, we develop an analytical framework to gauge progress and regress: what are they, when do they occur, and why? To that end, we cast performance measurement in a supply and demand scheme and delineate the concept of

performance information. Finally, we explore some theoretical approaches that may explain

supply and demand curves.

Key words: Performance measurement, history of performance measurement, supply and

demand of performance information, functions and dysfunctions, organisational theory


Progress and Regress in Performance Measurement Systems.

Performance measurement has not always been of similar importance in public administration.

Intuitively, we presume that progress and regress in performance measurement do occur. However,

can we gauge progress and regress through research instead of intuition? If so, what can we learn

from progress and regress about the role that performance measurement systems play in public

administration? Finally, what does this tell us about public administration itself? Below, we present a

search for and an embryonic development of an analytical framework.

The first paragraph describes a short history of performance measurement. We provide some

anecdotal evidence from the United States of America and Europe on progress and regress in

performance measurement systems throughout history. After defining some concepts, the third

paragraph describes what progress and regress might signify by means of a supply and demand

scheme. We assume that performance information will be used when supply meets demand. Thus, the

correspondence between supply and demand will prompt the consumption of the good offered, i.e. the

performance information. In order to track progress and/or regress, we need to know in more detail

what constitutes demand and supply of performance information. In addition, some considerations

regarding the measurement technology are put forward. However, these analyses still do not explain

why demand and/or supply augment or decline. In the fourth paragraph, we refer to some theories that

may clarify this issue. Explanatory theories may be found in both functionalist and constructionist

approaches. Both are indeed commensurable (Donaldson 1985, pp.35-46) and possibly necessary to

explain change in public administration in general and progress and/or regress in performance

measurement in particular (see e.g. Pollitt 2001 on clarifying convergence).

§1. A short history of performance measurement.

The New Public Management (NPM) actively emphasizes the significance of performance

measurement in relation to the introduction of new management tools in government (Goñi 1991;

Naschold 1996; OECD 1997; Williams 2000). Indeed, accurate performance information is needed for

the implementation of management instruments such as pay for performance, performance contracts

or performance budgets (Rainey 1998; Caiden 1998; Hatry 1999). However, NPM did not originate the idea of measuring government performance. Indeed, it is not a new perspective. In both Europe and the United States of America, at least a century of performance measurement efforts lies behind us


(Bouckaert, 1995; Williams 2000). We describe in brief the history of performance measurement in the

US and Europe, assessing the changes using the generic concepts of efficiency and effect.

At the beginning of the twentieth century, the first considerations with regard to government performance in the USA were made. Government must be not only honest, but efficient as well (Ridley and Simon 1938).

Administrative inefficiencies were considered a consequence of political interventions in the

administration. In response, administration theory developed towards a value-free, scientifically based study with a mission of economy and efficiency. Accordingly, the separation of politics and policy on the one hand from administration on the other was seen as the solution for increased

performance (see e.g. Wilson 1887; Goodnow 1900). The focus was on input, activities, output and

efficiency of the bureaucratic apparatus, not on effectiveness and outcomes. In practice, the search for

more efficiency in government resulted in the creation of the Bureau of Municipal Research in New

York (°1906), which inspired many other bureaus of government research. At the national

level, there was the Commission on Economy and Efficiency (°1912) and the privately sponsored

Institute for Government Research (°1916).

From the 1940s, politics and administration became progressively more entwined (Bouckaert 1995).

Scientific management shifted towards a broader management approach. “More

important than efficiency in carrying out given tasks were initiatives, imagination and energy in the

pursuit of public purposes. Those purposes were political and the administrators charged with

responsibility for them, as well as many of their subordinates, had to be politically sensitive and

knowledgeable” (Mosher, 1968 pp. 79-80). Correspondingly, the focus on performance appraisal now

included effectiveness. The administrators had a larger share in policy development, which resulted in

techniques and systems such as the planning programming budgeting system (PPBS) and later on

management by objectives (MBO) and zero-based budgeting (ZBB) (Schick 1966; Wildavsky 1969).

Important programmes were the recommendations of the Hoover Commissions (1949) on

performance budgets and the 1962 Bureau of the Budget’s productivity project. On the municipal level,

the International City Management Association published a first checklist on how to improve municipal

services (1958).


In the 1970s, the notion that public administration is a profession persisted, but private sector inspiration

and eagerness to implement private sector techniques resulted in a new stage. “Public administration

became public management” (Perry and Kraemer 1983). Interference between administration and

politics resulted in both administrative and political managers. The latter initiated the attention for

performance in the 1970s. New public administration should be value-driven, orienting professionalism towards values of social equity (e.g. Frederickson 1971). Public administrators should thus strive for a higher yield on the public money spent (cost-effectiveness). Consequently, the focus in

performance measurement was on both effectiveness and efficiency.

In the 1980s, there was a new direction in the performance movement, mainly because of taxpayers’

pressure in a context of rising deficits, but also inspired by the ideologically motivated approach of

cutting public expenditure (Bouckaert 1995). There was academic debate on whether the public and private sectors were alike or not (Allison 1980; Mosher 1982; Moe 1987). However, this debate was

overruled by the public budget deficits that called for savings. The private sector was considered more

productive and efficient, and served as an important example for public sector action. The main

objective of performance measurement practices was to reveal where to increase efficiency and/or to

cut spending. Productivity improvements and/or inefficiencies would allow saving on the budget.

Performance measurement was the tool for tracing and proving these inefficiencies. At any rate, savings were the dominant objective of performance measurement in the 1980s. The

Office of Management and Budget and the General Accounting Office put productivity and performance high on their agendas and became leading actors in advancing performance.

In the mid and late 1990s, government performance was increasingly seen as a competitive advantage for economic performance and as contributing to societal performance. Interestingly, the

alleged new paradigm in public administration was confronted with traditional bureaucracy and not

with the “government by the private sector” efforts of the 1980s (Barzelay 1992; Osborne and Gaebler

1993). Performance measurement emphasised both efficiency and effectiveness gains (OECD 1996;

Bouckaert, Hoet and Ulens, 2000). Savings were no longer the only objective of performance

measurement and management in the public sector. Effectiveness gained importance. Among other

initiatives, the National Performance Review (NPR) (1993) and the Government Performance and


Results Act (GPRA) (1993) were two major landmarks in the renewed integration of efficiency and

effectiveness concerns in performance measurement and management systems (National

Performance Review 1993; Fisher 1994; Pollitt and Bouckaert 2000). The Bush administration apparently follows the performance measurement tradition by, among other things, linking budgets to performance (Office of Management and Budget 2001).

In summary, the United States has a long history of performance measurement. However, progression did not follow a straightforward path. The focus shifted from inputs to effects. Tentative proof is

given that quantity, quality and usefulness of performance information shifted significantly and in a

non-linear manner throughout the last century. Finally, the US history points to the embedding of

performance measurement in the prevalent concepts of public administration.

The European scene is different, and also more complex and diverse. It largely remains to be mapped. Below, we give some anecdotal evidence. Europe indeed has significant experience in

performance measurement. The bulk of performance measurement initiatives are to be found in the

Anglo-Saxon and Scandinavian countries (Pollitt & Harrison 1994; Zifcak 1994; Pollitt and Bouckaert

2000). Nevertheless, other European countries also have significant experience. In this paragraph, we briefly describe some less well-known examples. In France, for example, there was the PPBS-inspired Rationalisation des Choix Budgétaires (°1969). In the 1990s, the centres de responsabilité (°1989) and the contrats de service (°1995) were important initiatives to increase responsibility and accountability based on performance information. Recently, the programme pluriannuel de modernisation broadened the vertical dimension of the contrats de service and the centres de

responsabilité with a horizontal scope by using contracts between functional ministers on the one

hand and the ministers of the budget and the public service on the other hand (Chaty, 1999;

Guyomarch 1999; Moniolle 1999). In Germany, the ‘Neue Steuerung’ (new steering) is a set of

modernisation initiatives launched by local government in the late 1980s and the 1990s that stemmed from budget deficits and financial difficulties. These NPM-inspired reforms followed citizen-oriented

reforms in the 1980s that originated from the perceived gap between citizens and their government

(Hendriks & Tops 1999). Performance information plays an important role in this ‘new steering’ for

both savings and accountability to the public. In the middle and late 1990s, there was a proliferation of


initiatives over different municipalities and different tiers of government (Hill & Klages, 1996a; Hill &

Klages, 1996b; Klages, 1999). In the Netherlands, the interest in performance of the public sector has

been triggered by the ‘Commissie voor Beleidsanalyse’ (Commission for Policy Analysis) (°1971). The

performance information has to a great extent been linked to the budget. The first government-wide

initiative was the ‘kengetallen’- initiative (performance indicators) (°1990). The ‘kengetallen’ were part

of the budget of the different ministries. In 1995, there was devolution of tasks to more independent

agencies. The ‘kengetallen’ have to be used by the agencies for accountability to central government

(Janssens & Maessen 1996; Algemene Rekenkamer 1997). The most recent initiative is the VBTB

reform which stands for ‘from policy budget to policy audit’ (Tweede Kamer der Staten Generaal 1998-

1999). In Belgium, management reform in general and the focus on performance measurement in

particular have traditionally been overshadowed by lengthy processes of institutional reform. It was only in the 1990s, when a substantial decentralisation was accomplished, that the Flemish region took the first

initiatives. Some important programmes were ‘Doelmatigheidsanalyse’ (effectiveness analysis), the

‘Vlaamse Regionale Indicatoren’ (Flemish Regional Indicators) and the development of ‘Management

Informatiesystemen’ (Management Information Systems). Recently, the federal government also

started a reform trajectory (Bouckaert & Auwers 1999; Bouckaert, Hoet & Ulens, 2000).

This brief history of performance measurement, in both the United States of America and some

European countries, illustrates that performance measurement has extensive antecedents. Intuitively,

one feels that performance information has played different roles of varying importance throughout administrative history, and that change is not a linear process. The text below presents a first step in

the development of an analytical framework to assess progress and regress in performance

measurement. Two questions will be addressed: ‘what do progress and regress mean?’ and ‘why

does progress or regress occur?’. First, we refine some concepts.

§2. Some concepts

Performance measurement system: Fuchs et al. (1988) define a system as a set of mutually

dependent elements and relations. Sharkansky (1975) describes an administrative system as the

environment, inputs, conversion process, outputs and feedback that relate and interact with each other

around an administrative unit. Combining the definitions, we describe performance measurement


systems as a set of mutually dependent elements (inputs, conversion processes, output and

feedback loops within the environment), whereby the output consists of performance information.

Performance measurement process: Smith (1996) identifies three overarching analytical steps in the

assessment of a system’s effectiveness: measurement, analysis and action. Undoubtedly, these steps

may be refined into more detail on how to gather and analyse data and on how to use this information

in the organisation’s operations (e.g. Hatry 1999). In this text, the performance measurement process

consists of the first two steps, i.e. measurement and analysis. Then, performance data are the bits

resulting from measurement (e.g. the O₃ concentration in the air, the number of cars, the km² of woodland). The analysis

turns the data into information. Finally, performance information is the body of analysed data, ordered

to effect choice (Wildavsky 2000). When we later address the supply of performance information,

we refer to this conception of performance information, i.e. the output of the measurement process

that consists of measurement itself and its analysis.

§3. What do progress and regress mean?

Before we can address the question why progress or regress takes place, we need to comprehend

what it is. We assume that genuine progress implies a better match between supply of and demand for

performance information. When supply of and demand for performance information correspond better,

there will be increased consumption. Thus, we define the progression from lower to higher

consumption of performance information as progress. Ceteris paribus, we delineate the regression from higher towards lower consumption of performance information as regress. Table 1 shows the

different positions and the trajectories of progress, i.e. progression from A to D directly, or indirectly

through B or C, and from D1 to D2/D3/D4.


                      No demand      Weak demand     Strong demand
No supply             A              B1              B2
Weak supply           C1             D1              D2
Strong supply         C2             D3              D4

Table 1: Progress in performance measurement as the resultant of supply of and demand for information

Position A: There is no supply of and no demand for performance information. Progress would imply

advancement from position A to position D. However, a movement to positions B or C is also possible. The issue is not on the agenda; moreover, there is not even an awareness that it could be an issue (e.g. AIDS in its early stages).

Position B: There is no supply of, but there is demand for information. There is intent to use

performance information. However, it is hampered by the lack of performance information and

measurement. This results in a demand frustration zone (e.g. politicians ask for data on citizens’ trust and client satisfaction that are not available).

Position C: There is no demand for, but there is supply of information. Performance measurement is

developed but the information is not used. This results in a supply frustration zone (e.g. early warning information that goes unheeded).

Position D: There is demand for and supply of performance information. Even then, demand and supply may each be weak or strong (e.g. demand/supply for regular management versus demand/supply for

the policy cycle).
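To make the scheme concrete, the following minimal sketch (our illustration; the level labels and the exact D1-D4 layout are assumptions based on our reading of Table 1) classifies supply and demand levels into positions:

```python
# Minimal sketch (illustrative assumption, not part of the framework itself):
# mapping supply/demand levels of performance information onto Table 1.

def classify(supply: str, demand: str) -> str:
    """Map levels ('none', 'weak', 'strong') onto a Table 1 position."""
    if supply == "none" and demand == "none":
        return "A"                                     # issue not on the agenda
    if supply == "none":
        return {"weak": "B1", "strong": "B2"}[demand]  # demand frustration zone
    if demand == "none":
        return {"weak": "C1", "strong": "C2"}[supply]  # supply frustration zone
    # Both present: D1 (weak/weak) through D4 (strong/strong), our assumed layout.
    return {("weak", "weak"): "D1", ("weak", "strong"): "D2",
            ("strong", "weak"): "D3", ("strong", "strong"): "D4"}[(supply, demand)]

# Progress is a trajectory towards D4, e.g. A -> B1 -> D1 -> D4:
print(classify("none", "weak"))       # B1: demand without supply
print(classify("strong", "strong"))   # D4: supply meets demand; information is used
```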

Thus, we assess progress and regress with the economic concepts of supply and demand of

performance information. Before we are able to consider the reasons for change in performance

information consumption (be it progress or regress), we first need some refinements. First,


performance information is multi-faceted. It has several characteristics that may differentiate one type

of performance information from another. Some of these distinctions are dealt with below. Secondly,

we point to some important issues with regard to the production technology of performance

information.

Performance information as a multi-dimensional product.

Since we consider performance information as the product of a performance measurement system

and since we see progress as a better match of supply of and demand for that very product, it is

necessary that the comparison is homogeneous in its quality and quantity dimensions. Yet,

performance information is not homogeneous. We risk comparing apples and oranges when we consider

performance information as one-dimensional. In other words, one does not meet the demand for a

Rolls Royce by providing a Mini Cooper, though they are both cars. Various facets are briefly discussed below. Table 2 summarises this section.

Some facets of performance information:
1. Performance information may differ in the coverage of the input-output-effect process.
2. Performance information may inform not only about the service itself but also about the perception and the expectations of the public concerning the service.
3. Performance information may differ in the frequency of the measurement efforts.
4. Performance information may differ in the coverage rate of organisational activities, goals, budgets or personnel.
5. Performance information may differ in its external focus, i.e. on side effects and the environment.
6. Performance information may differ in the possibility to be aggregated and/or disaggregated.
7. Performance information may differ in the degree of systemisation (e.g. by the use of quality models such as BSC, EFQM, CAF).
8. Performance information may have a scope on quality or quantity of public service.
9. Performance information may or may not be contrasted with standards.

Table 2: Some facets of performance information

1. Performance information may differ in the coverage of the input-output-effect process. Does the

measurement information deal with inputs, processes, outputs, intermediate outcomes, effects, efficiency

(input/output), productivity (output/input), effectiveness (output/effect) or cost-effectiveness

(input/effect)? At any rate, throughout history the focus shifted repeatedly since the need for savings,

improved services or transparency shifted (supra). It is understood that providing input information will

not meet an organisation’s demand for performance information in order to make a strategic plan.

Supply and demand need to be coordinated.


2. Performance information may inform not only about the service itself but also about the perception and the expectations of the public concerning the service. Usually, an organisation needs information on all three aspects. An organisation may improve the service. However, when the perception of the public

does not follow, the efforts will mostly not pay off in increased satisfaction of citizens with public

service. Moreover, satisfaction might be influenced by other factors than performance, such as overall

trust in government. Until now, there is no clear-cut evidence on the relationship between

performance, satisfaction and trust. Swindell and Kelly (2000) explored the relation between citizen

satisfaction data and performance measures. They found that citizens are more able to evaluate

services than some might suggest. Consequently, performance improvement in service delivery may

lead to a higher level of satisfaction. Yet, a great deal of research remains to be done on the relations in the performance-trust-satisfaction triangle and, what is more, there needs to be a match between supply and demand of information on the three concepts.

3. Performance information may differ in the frequency of the measurement efforts. The time

perspective may range from over several years to annual, monthly, weekly or daily measurement. At

the extreme, there is continuous measurement. Ongoing, repeated measurement efforts allow

comparisons over time that enrich the analysis compared to a one-time, ad hoc measurement effort

(Morley, Bryant & Hatry, 2001). Again, demand and supply need to be matched.

4. Performance information may differ in its coverage rate. Measurement efforts may focus on a

limited number of policy fields or departments. Nevertheless, a more extensive measurement system

will comprise more policy fields, more departments, a higher percentage of the budget or a higher

percentage of the workforce. At municipal level, e.g. in the USA and the U.K., there is substantial

evidence of performance measurement in a broad range of policy fields (Hatry 1992; Ammons, 1996;

Audit Commission 2001). A remarkable example of an attempt to measure more extensively is to be

found at central level in the Netherlands. The ministries had to indicate the percentage of the

expenses that were covered by performance indicators. For example in 1997, 72% of the measurable

expenses had been accounted for by performance measures (Algemene Rekenkamer 1997; Sorber

1999). However, not all of the expenses were considered eligible for this performance-based review.

The ministries and the Algemene Rekenkamer (Audit office) had to agree on the expenses that could


be exempted from the calculation of the coverage rate. At the end of the day, attention was diverted towards the discussion of whether an expense could be accounted for in a meaningful way or not, instead of whether a ministry measured well or not (Bouckaert, Hoet and Ulens 2000). Supply and

demand for a particular coverage rate need to be discussed since measurement costs and benefits

need to be balanced.
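The arithmetic behind such a coverage rate is simple; the sketch below uses invented figures (chosen only so that the outcome mirrors the 72% in the Dutch example) and illustrates the distortion noted above: enlarging the exempted category raises the rate without any additional measurement.

```python
# Illustrative sketch of a Dutch-style coverage-rate calculation
# (all figures invented; only the 72% outcome mirrors the example above).

total_expenses = 100.0   # a ministry's total expenses (arbitrary units)
exempted       = 20.0    # expenses agreed to be non-measurable, excluded up front
covered        = 57.6    # expenses accounted for by performance indicators

measurable = total_expenses - exempted
print(f"coverage rate: {covered / measurable:.0%} of measurable expenses")   # 72%

# The perverse incentive: reclassifying expenses as exempt improves the
# reported rate even though nothing additional is measured.
more_exempted = 30.0
print(f"after more exemptions: {covered / (total_expenses - more_exempted):.0%}")  # 82%
```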

5. Performance information may differ in its external focus. The information may encompass indicators

on the core business of the organisation, indicators on environmental and societal facets and side effects, or on both. In Canada, for example, there are some recent initiatives on providing societal indicators and making them useful for parliamentarians (Bennet et al. 2001; President of the Treasury Board 2001). Equally, international institutions developed cross-country indices such as the World Development Indicators by the World Bank and UN-affiliated institutions, but several countries and regions also took initiatives assessing societal development, the environment and the quality of life (Carr-Hill and Lintott with Bowen and Hopkins 1996; Eckersley 1998; World Bank Group 1997).

Furthermore, private sector institutions compare countries and business environments. The Swiss IMD

for instance reports on 49 nations’ performance based on 300 criteria within four sub-categories:

economic performance, government efficiency, business efficiency and infrastructure (IMD 2002).

Another example is the KPMG report on business costs in North America, Europe and Japan, finding

that Canada is the most attractive country for business investment. The national scores are an

average of costs in comparable cities in each country (KPMG 2002). At local level, the cities-of-

tomorrow network addresses quality of life indicators in local government (Bertelsmann Foundation

2001). Again, supply and demand need to be aligned as far as the external focus is concerned.

6. Performance information may differ in the possibility to aggregate and/or to disaggregate

performance information. It may be necessary to split up information into different breakout categories in

order to explain high or low performance. Breakout categories might be organisational units, customer

characteristics, geographical location, difficulty of workload, or type and amount of service (Hatry

1999). However, it may be equally important to be able to consolidate performance information for

several reasons. First, it should be noted that highly disaggregated information involves a possible vulnerability: the lower the level of analysis, with more focused and detailed information (a frog’s view), the higher the illusion of control (and the weaker the helicopter view on important societal matters). Therefore, to

get the whole picture, information needs to be consolidated. Next, it is increasingly acknowledged that

the public sector adds to the national income. This implies that the zero-hypothesis on public sector

productivity is abandoned (Dowrick and Quiggin 1998). The shift from the zero-hypothesis to the non-zero hypothesis requires increased attention to the consolidation of performance information to a

government-wide level.

How wide should the lens be to get the whole picture? There are different levels and different angles

for consolidation of performance information. Consolidation may occur from agencies to holdings, but

also within policy fields and from service and production units to service chains. At the highest level,

performance information sheds light on the performance of governance, i.e. the joint capacity of government

together with other societal actors to give direction to society. However, performance of governance is

more than the separate performance of hierarchies, markets and networks. Performance of

governance is not just the sum of its components. Therefore it is important not only to look at

performance of a single network, a single hierarchy or a single market mechanism, but at the

performance of hierarchies, networks and markets working together throughout the different steps in

policy cycle (Peters, 1998). The level and the degree of detail of performance information need to be

agreed upon by supplying and demanding actors.
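As a toy illustration of this point, the sketch below (invented data and hypothetical field names) reads the same records once at a disaggregated, frog’s-view level and once consolidated into a single helicopter-view figure.

```python
# Toy sketch (invented data): disaggregating performance information by a
# breakout category versus consolidating it into one overall figure.

records = [
    {"agency": "A", "region": "north", "output": 120, "input": 100},
    {"agency": "A", "region": "south", "output":  80, "input":  90},
    {"agency": "B", "region": "north", "output": 200, "input": 150},
]

def efficiency(rows):
    """Total output per unit of total input over the given records."""
    return sum(r["output"] for r in rows) / sum(r["input"] for r in rows)

# Frog's view: one figure per agency (detailed, but no overall picture)...
for agency in sorted({r["agency"] for r in records}):
    subset = [r for r in records if r["agency"] == agency]
    print(f"agency {agency}: {efficiency(subset):.2f}")

# ...helicopter view: one consolidated figure across the whole field.
print(f"consolidated: {efficiency(records):.2f}")
```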

7. Performance information may differ in the degree of systemisation. Does the performance

information fit within a systemic or standardised model or not? Some countries develop national

models such as the Planning, Reporting and Accountability Structure (PRAS) in Canada (1997), the

Government Performance and Results Act (GPRA) in the United States (1993), the Financial

Management and Accountability Act in Australia (1997) or the ‘rapport d’activité annuel’ in France

(2000) (Bouckaert, Hoet & Ulens, 2000). Furthermore, academic and private sector organisations

propose international generic models. Note that generic models such as the ISO standards, the

Balanced Scorecard (BSC), the European Foundation for Quality Management (EFQM) models, the

Common Assessment Framework (CAF) and the Public Sector Excellence Model (PSEM) gradually

expand their scope by shifting from an input and process focus towards the inclusion of an output and


effectiveness focus. The supplied performance information should correspond with the demanded

extent of systematisation or standardisation.

[Figure 1 arrays the generic models ISO 9000, BSC, EFQM, CAF and PSEM along the input, activities, output and effects stages they cover.]

Figure 1: Evolution in generic models of performance measurement systems (Bouckaert and Auwers, 1999b)

8. Performance information may have a scope on quality or quantity of public service. In general,

measurement systems have a tendency towards more quantitative and tangible aspects of service

delivery. Especially information on quality is hard to relate to input information. Nonetheless,

information on quality improvements should always include price/quality. Indeed, quality always has its

price. Improvement in quality consequently is a matter of Willingness To Pay (WTP) (see Figure 2).

What tariffs or taxes do citizens want to pay for a quality improvement? A focus that is constrained to

the Y-axis is flawed because it excludes the societal choice that has to be made through the political

process. Supply and demand need to correspond with regard to the measurement of quality or

quantity of service delivery.

[Figure 2 plots quality (Y-axis) against price (X-axis), with the WTP point marking the quality/price combination citizens are willing to pay for.]

Figure 2: Willingness To Pay for service quality
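A numeric toy example may make the trade-off concrete; all figures below are invented, and the linear valuation of quality is a simplifying assumption, not a claim about how WTP is actually elicited.

```python
# Toy sketch (all figures invented): judging a quality improvement against its
# price instead of looking at the quality axis alone.

quality_gain         = 0.15   # improvement as a fraction of one full quality unit
price_increase       = 12.0   # extra tariff/tax per citizen per year
wtp_per_quality_unit = 70.0   # stated willingness to pay for one full quality unit

value_of_gain = quality_gain * wtp_per_quality_unit          # 10.5
print(f"value of gain: {value_of_gain}, price: {price_increase}")
print("within WTP" if value_of_gain >= price_increase else "beyond WTP")  # beyond WTP
```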


9. Performance information may or may not be contrasted with standards. As pointed out before, the

performance measurement process comprises two major steps. First, there is the measurement itself:

“how much…?”, “how fast…?”, “how high…?”, etcetera. Secondly, there is the analysis of the data.

Often, analysis is done by confronting the data with a standard. Possible standards are self-

assessments, comparisons with oneself through time or comparison with others leading to frontiers,

benchmarking and the identification of best practices (Spendolini 1994; Karlöf and Östblom 1995; Liner, Hatry et al. 2001). Thus, performance information might differ in the standard setting used for

analysing the data. Does the supplied standard setting correspond with demand?

An important issue with regard to standard setting is that in a usual statistical distribution not every

organisation can be a best practice. Thus, the argument turns to the bottom-line practice. How big does society allow the gap between bottom and top to be? In other words, what is the

importance of the argument that all citizens should get the same value for their money?
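The sketch below illustrates, with invented figures, two of the standard-setting options named above (comparison with oneself through time and comparison with others), together with the distributional point that a best-practice standard necessarily leaves most units below it.

```python
# Illustrative sketch (invented data): contrasting one indicator with two
# kinds of standards.

own_history = {2000: 10.4, 2001: 11.0, 2002: 11.8}       # comparison with oneself
peers = {"unit_A": 11.8, "unit_B": 13.5, "unit_C": 9.7}  # comparison with others

own = own_history[2002]
print(f"change vs. 2001: {own - own_history[2001]:+.1f}")        # over-time standard

best = max(peers, key=peers.get)                                  # best practice
print(f"gap to best practice ({best}): {peers[best] - own:.1f}")

# In any distribution only one unit can be the best practice; the policy
# question is how wide a bottom-to-top spread society accepts.
below = [u for u, v in peers.items() if v < peers[best]]
print(f"units below best practice: {below}")
```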

The performance measurement technology: the capacity to supply information

In the previous section, we looked upon performance information as a multi-faceted product. The next

issue that comes to the fore is whether the measurement process is able to provide performance

information in its various facets. Some aspects of performance measurement capacity are considered below.

1. Processing techniques and benchmarking techniques. Arguably, progress in both the processing

techniques and capacities has increased the measurement capacity. For example, frontier analyses

like Data Envelopment Analysis (Lovell, Walters and Wood 1990; Charnes, Cooper et al. 1994) and Free Disposal Hull (Tulkens 1990) have been applied to public sector services on several

occasions. DEA has been used to compare public sector offices such as fire services, local civil

registry offices, hospitals, schools, prisons and courts (e.g. Bouckaert 1992; Bouckaert 1993; De

Borger, Kerstens et al. 1994; Blank 2000). Nonetheless, these techniques have yet to be disseminated from academia to the performance measurement systems used by public sector organisations. The use of processing techniques requires adequate processing capacities. Undoubtedly, processing capacities have increased as far as hardware and software are concerned.


Presumably, there is a positive impact of IT on performance. Lee and Perry (2002) show a positive

relation between the Gross State Product of American State governments and their IT investments.

However, many issues remain. What is the impact of ICT, e-government and data warehousing on

organisations, and vice versa? What is the impact of ICT on performance measurement? Is the

development of software for performance measurement and reporting enhancing the use of more

sophisticated processing techniques? What are the benefits and costs of the integration of ICT in the

performance measurement systems (Brown 2001; Cloete 2001)? Indicators on quality require specific

processing techniques with more room for interpretation. Do processing techniques and capacities

allow the assessment and integration of qualitative information?
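As an indication of what such a frontier technique involves in practice, here is a minimal input-oriented CCR DEA sketch using scipy; the data are invented, and this is a textbook formulation, not the model used in any of the studies cited above.

```python
# Minimal input-oriented CCR DEA sketch (toy data; a textbook formulation,
# not the model used in the cited studies).
import numpy as np
from scipy.optimize import linprog

# Four units (e.g. registry offices): one input (staff), one output (cases).
X = np.array([[2.0], [3.0], [4.0], [5.0]])   # inputs,  shape (n, m)
Y = np.array([[4.0], [5.0], [8.0], [7.0]])   # outputs, shape (n, s)
n = len(X)

def ccr_efficiency(o: int) -> float:
    """min theta s.t. lam'X <= theta * x_o, lam'Y >= y_o, lam >= 0."""
    c = np.r_[1.0, np.zeros(n)]                           # variables: theta, lam_1..lam_n
    A_in = np.hstack([-X[o].reshape(-1, 1), X.T])         # lam'X - theta*x_o <= 0
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])  # -lam'Y <= -y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for o in range(n):
    print(f"unit {o}: efficiency = {ccr_efficiency(o):.2f}")   # units 0 and 2 score 1.00
```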

2. An additional topic on the performance measurement capacity comprises learning and improvement

strategies. Performance measurement spreads across different tiers of government, in a wide range of services and countries. All those levels, countries and service providers yield a good opportunity to improve and to learn from one another. Strategies to adopt initiatives of other organisations may enhance the production of performance information. Platforms such as the OECD’s PUMA have established initiatives to enhance these learning cycles (OECD 2001). Learning cycles require a study

of what is significantly in common between countries, levels of government and services and what is

not. However, there are many obstacles in learning from others. Understanding differences in socio-

economic factors, in the political system and in the administrative system together with the

identification of chance events (e.g. scandals and disasters) may point to the particular and the

generic aspects of management reform and thereby explain the viability and constraints of learning

from other countries, other levels of government and providers of other services (Pollitt and Bouckaert 2000).

Performance information is the output of a reliable performance measurement process, where

processing techniques, benchmarking and learning cycles, among other factors, play an important

role. It is not straightforward to design a performance measurement system and to operate performance measurement processes that provide high-quality performance information. Bouckaert (1995b)

described thirteen measurement diseases that point to thirteen possible defects in performance

measurement systems. Three diseases are about assumptions and convictions that harm the activity


of measuring itself. Four diseases involve the volumes and the numbers perceived. Six concern the

content, position and amount of measures. The result of the diseases is that the performance

measurement technology is affected and therefore that performance information will be of inferior

usefulness. We briefly describe the thirteen diseases.

a. The Pangloss Disease: “We live in the best of all possible worlds.” Therefore, performance information is not needed since

measurement always ends up proving the existence of best practice.

b. The impossibility disease: Performance measurement in the public sector is impossible because of the absence of prices and of excludable customers.

c. Hypochondria: Hypochondria is the feeling that the public sector must be worse than the private sector. Performance measurement in the public sector is more difficult than in the private sector; equally, government performance is presumed to be worse than private sector performance.

d. The convex/concave disease: The measured output is different from the real output. A measure is convex when the

measured output is higher than the real output (e.g. citation cycles in citation indexes) and concave when it is lower (e.g.

registration data of visitors in a hospital).

e. Hypertrophy: A process component balloons only because it is measured. In this case, real output rises. Cost-per-unit efficiency measures are an example: the cost can often only be reduced by increasing output. At the end of the day, the overall budget might increase.

f. Atrophy: A process component deflates because of the measurement. A typical example is a decrease in service quality

when measuring quantities.

g. Mandelbrot disease: Mandelbrot (1977) discussed measuring the length of the coast of Britain and stated that the length is a

function of the detail of the yardstick. Likewise, more data on crime might change people’s perception of crime. More

measurement points lead, in people’s perception, to larger amounts while reality remains unchanged.

h. Pollution disease: Indicators should be on inputs, processes, outputs and effects. However, indicators on the various

aspects of the system often are mixed up. This leads to pollution of the system and decreases transparency.

i. The inflation disease: The performance measurement system provides an inflated list of indicators. Performance information

may become less intelligible when there are too many measures. A multitude of indicators may encourage a consumerist

“cafeteria style” in which people go shopping with the measurement list.

j. The enlightened top disease: Indicators that are imposed by the top or are external to the organisation are likely to

encounter resistance from within the organisation. This lack of legitimacy might seriously diminish the capacity of the

performance measurement system to provide useful performance information.

k. The time shortening disease: This flaw in the measurement process makes the organisation focus on the short term while

neglecting the long term. This is problematic because the long term does not equal the sum of the different short terms.

l. The Mirage disease: Because of measurement noise, the measurement system shows something different from what we

think we perceive. We see mirages instead of real things.

m. The shifting disease: The indicators comprised in the measurement systems do not correspond with the organisational

goals. This results in a shift away from organisational goals.


This list of diseases is not exhaustive. Presumably, many more maladies may be added (see e.g. van

Thiel and Leeuw 2002). For instance, performance measurement systems might suffer from a tunnel

view. There is no flexibility in adding or changing indicators. As a result, the management of the

organisation only deals with the measured activities while neglecting the others. By looking at production factors and pathologies, a thought-out model of the production technology for providing performance information could be developed.

§4. Why does progress or regress occur?

Organisational and societal guidance, control and evaluation may be based on information or trust. At

one extreme, guidance, control and evaluation are based solely on information. This would involve (probably too) vast costs. At the other extreme, guidance, control and evaluation are achieved entirely through trust in the guided, controlled and/or evaluated actors. Again, this would be largely inefficient because government would be unaware of its steering capacity in society. In effect, guiding, controlling and evaluating then proceed blind. The search is for the optimal balance

between guidance, control and evaluation based on information and trust. This text is primarily

focussed on a particular information system, i.e. the performance information generated by

performance measurement systems. Hence, the question turns to the motives for seeking or providing

(more) performance information with specific characteristics at the expense of steering based on trust.

This question can be rephrased in economic terms as a search for the benefits and the costs (in the

broadest sense) of supplying and/or demanding performance information.

Both constructionist and functionalist theories may yield plausible hypotheses on why organisations

(do not) search for performance information and why organisations (do not) offer performance

information. Constructivist and functionalist theories can indeed be brought together. Both views have

the capacity to absorb challenges posed by the other framework. Pollitt (2001: p.483) refers to Dunleavy’s bureau-shaping model as an example of a theory showing that bureaucratic motives far more complicated than budget maximisation can be explained within a rational choice framework (Dunleavy

1991). Likewise, March and Olsen (1989) integrate functionalism in a constructivist framework: ‘Having determined what action to take by a logic of appropriateness, in our culture we justify the action by a logic of consequentiality’ (March and Olsen 1989: 162).


Why provide information? Why demand information? The logic of consequences.

Why would an actor provide or demand performance information? Using Merton’s framework, supply

and demand of performance information may be explained by the functions and dysfunctions of

providing or demanding information. The consequences of social phenomena may contribute to the

goals of a system (functional) or not (dysfunctional) (Merton [1949] 2002). Thus, performance

information may be functional for some subsystems and dysfunctional for others. The benefits of

performance information are the manifest and latent functions. The costs are the dysfunctions. The

same applies to not supplying or demanding performance information. Not measuring performance may

be functional or dysfunctional and thus have its subsequent benefits and costs. Note that within the

functionalist paradigm, consequences are causes. Table 3 reflects the different positions. Merton further distinguishes between manifest and latent functions and dysfunctions. Manifest (dys)functions

refer to ‘those objective consequences for a specified unit (person, subgroup, social or cultural

system) which contributes to its adjustment or adaptation and were so intended; the second (latent

functions) refer to unintended and unrecognised consequences of the same order (Merton [1949]

2002: p.398).’ We hypothesise that latent functions and dysfunctions may equally influence supply and

demand curves.

                                                      Functional     Dysfunctional
Supply of and demand for performance information      Position A     Position B
Not supplying or demanding performance information    Position C     Position D

Table 3: Performance measurement in a functionalist framework

The stance of the existing mainstream new public management literature (NPM) is a manifest

functionalist one (Position A). Often indirect proof is given for the functionality of performance

measurement. It is proven that not measuring performance is dysfunctional (Position D) (e.g. bad

decision-making, disputable service contracts, insufficient accountability due to a lack of performance

information). Therefore, it is argued, performance measurement must be functional (Position A).

Bouckaert and Auwers (1999) demonstrate that performance information is useful, possible and

necessary by refuting the thesis that performance measurement is not useful, not possible and not

necessary. The Audit Commission (2000) motivates performance measurement by telling what

happens if one does not measure results (Figure 3).


What gets measured, gets done.
If you can demonstrate results, you can win public support.
If you can’t recognise failure, you can’t correct it.
If you can’t see success, you can’t learn from it.
If you can’t reward success, you’re probably rewarding failure.
If you can’t see success, you can’t reward it.
If you don’t measure results, you can’t tell success from failure.

Figure 3: Why measure performance? (reductio ad absurdum proof) (Audit Commission 2000: p.6, based on Osborne and Gaebler 1992)

Thus, the thesis is that performance information is functional for management and governance to a varying extent, and that measurement therefore progresses or regresses. Indeed, performance

information is seen as pivotal for new public management techniques (OECD 1997). Hood (1991)

distinguished seven main points in NPM: (1) letting the managers manage, (2) a focus on explicit

standards and measures of performance, (3) better output controls, (4) breaking up of the public

sector in corporatised units around products, (5) contracts and public tendering procedures, (6) a

stress on private sector management styles with flexibility in hiring and rewarding and (7) a greater

discipline and parsimony in resource use. At least four (1-3, 6) and probably all seven facets of

NPM require substantial performance information. Consequently, an increasing supply and demand

for performance information may be explained by this functionality. However, as the history of

performance measurement showed, NPM was not the first time performance information was

functional for public administration. PPBS and MBO for example required a great deal of performance

information, probably more than could be provided (Wildavsky 2000).

The main thesis is that performance information is functional for effective government (Position A).

Often this position is motivated by the dysfunctionality of the lack of performance information (Position

D). Some researchers, however, showed that not measuring performance might be functional as well and that measuring performance might be dysfunctional too (Positions C and B). Sharkansky (1975)

showed that input budgets were often more advantageous in times of tight budgets because they reduce the potential for political conflict compared to performance-based budgets. Not measuring


performance in this case was favourable for the speed of decision-making (Position C). Furthermore, Dutch research showed that having information was dysfunctional for the managers of large, technically complex projects (Otten 1996) (Position B). Critical information on project development caused managers to avoid the problem rather than solve it. This raises questions about the

manageability of public services based on performance information. Likewise, Halachmi (1996) warns

of potential dysfunctions of quality awards, a management tool requiring substantial performance

information. Potential dysfunctions are the internal turmoil resulting from explicating goals, the risk of a

short-term focus and a decreased capacity to spot trends that are not included in the projected quality

model. Finally, Heinrich (1999) showed that measurement of cost per placement in job training programmes had negative implications for job quality. The availability of performance data was in this case dysfunctional for the organisational goals. To conclude, various research results suggest that consequences may provide an appealing explanation for demanding or supplying performance information.

Why provide information? Why demand information? The logic of appropriateness.

Functionalist theories are not the only ones that might explain the supply of and demand for information. Constructionist theories may equally reveal factors influencing the supply and demand curves. Organisational behaviour in this view should be explained by the appropriateness of human action within a socially constructed reality (Berger and Luckmann 1966). An interesting approach is DiMaggio and Powell’s (1983) institutional isomorphism.

Isomorphism is the process of homogenisation in which organisations within a field come to resemble one another more and more. DiMaggio and Powell distinguish between competitive isomorphism and institutional isomorphism. Competitive isomorphism will occur in open market situations with free competition. Due to processes of ‘natural selection’, a limited set of viable organisational forms will prevail. Instead of studying processes of selection, DiMaggio and Powell focus on processes of adaptation to the environment. Choices of managers are mostly based on ‘taken-for-granted assumptions’ (DiMaggio and Powell 1983: 149). Most public sector organisations, however, are

not subjected to open market competition. They are subject to institutional isomorphism. Non-profit

organisations conform to the normative demands and expectations of their environment. Three

mechanisms cause institutional isomorphism: coercive, mimetic and normative isomorphism. Coercive


isomorphism is the formal and informal pressure on and from organisations. These are more or less rational adaptation processes induced by, for instance, subsidy requirements. A second category is mimetic isomorphism: organisations that doubt their own functioning imitate organisations that they perceive to be more effective or legitimate. These imitation processes may occur intentionally or

not. Thirdly, normative isomorphism refers to the shared norms of organisation members. The

increased professionalism of the public sector leads to a professional elite with a limited number of

norms. There is an ‘esprit de corps’ that reduces variety in organisational behaviour. These processes enable inefficient organisations to survive and flourish, which cannot be explained by natural selection or rational adaptation.

With regard to the reasons for supplying or demanding performance information, this theory would

yield different hypotheses on why organisations supply or demand performance information.

Below, some exemplary hypotheses are derived from the different isomorphisms.

A potential hypothesis for coercive isomorphism might be: an organisation demands performance information because it is

stated in legislation. For example, due to the Government Performance and Results Act (GPRA) reform in the US, government

agencies are obliged to integrate performance information in their budgets. Likewise, more and more subsidy regulations for

local government in Flanders require performance-based policy planning (Bouckaert and van Dooren 2000). Consequently, local

authorities increasingly demand and produce performance information.

A potential hypothesis for mimetic isomorphism might be: an organisation demands performance information because

comparable organisations (that are considered to be effective by politicians and/or the public) do the same. Halachmi (1996) for

instance refers to the participation in quality awards as a reflection of institutional awareness that quality is important for the

organisation.

A potential hypothesis for normative isomorphism might be: an organisation demands performance information because the majority of the staff has an economics or public administration degree. By contrast, it may be hypothesised that organisations where the most widely held degrees are in law demand less performance information. Wilson (1989: pp. 59-65) for instance points to several examples in which professional norms shape organisational behaviour. The Federal Trade Commission, e.g., has two professions: economists and lawyers. With regard to anti-trust issues, a lawyer will pursue a firm that violates the law while an

economist will target the infringements that influence the market prices and thus consumer welfare.

Lammers, Mijs and Van Noort (1997) point to the weaknesses in this theoretical framework.

Institutionalisation processes only come from outside the organisation. There is no reciprocal relation


between the organisation and its environment. The environment determines the organisation. This leads to a

focus on macro structures and processes, neglecting actors pursuing their interests. There is no room

for power and an actor perspective. Nonetheless, power and conflict may be important for explaining

supply and demand curves. Thus, constructionist theories may enhance the explanation of supply and

demand of performance information, but need a substantial theoretical input from the structural-functionalist school.

We briefly discussed some possible theories that might yield meaningful hypotheses on the motives

underlying the supply and demand curves for performance information. However, the span of an

article inevitably limits the theoretical exploration. We only referred to a small sample of theories that

might explain supply and demand. Other useful theories may be for instance rational choice

approaches such as Williamson’s transaction cost theories (Williamson 1975; Williamson 1985) and

principal agent theories (Alchian and Demsetz 1972; Jensen and Meckling 1976). In addition, other

neo-institutionalist theories such as Scott and Meyer’s decoupling of technical and institutional

environment (Scott and Meyer 1994) may produce useful explanations.

Conclusion

The ambition of this text was to develop an analytical framework for assessing progress or regress in

performance measurement. To that end, we rephrased the subject in economic terms. We started from

the assumption that genuine progress or regress only occurs when there is a better correspondence

between supply and demand of performance information. Only then will performance information be

used. Next, different dimensions of performance information were discussed. A better match of supply

and demand of performance information can only be explored when we have a better understanding

of the different characteristics of performance information. A related issue deals with the technology

for providing performance information. A sound production process for performance measurement is a

prerequisite for providing performance information with particular characteristics. Subsequent to the

issue related to the nature of performance information and the production technology, we explored

some theoretical frameworks that might explain the supply and demand curves. The research questions turn to why organisations supply and/or demand performance information. Theories from both the constructionist and the structural-functionalist traditions might explain supply and demand. Tentative proof for

this thesis is given by deriving some hypotheses on the motives for supply and demand for


performance information from the functionalist theory of R.K. Merton and the constructionist theory of

DiMaggio and Powell. However, presumably none of the theories will provide an encompassing

framework. Therefore, the next step will be a search for a framework that combines or integrates constructionist and functionalist thinking.

REFERENCES

Alchian, A.A. and Demsetz, H (1972). ‘Production, information cost, and economic organization’, American Economic

Review. 62:777-795.

Algemene Rekenkamer (1997). Informatievoorziening en Kengetallen (information supply and indicators), Den Haag:

Algemene Rekenkamer.

Allison G.T. (1980) ‘Public and Private Management: Are They Fundamentally Alike in All Unimportant Respects?’ In:

Shafritz, J.M. and Hyde, A.C. (1997). Classics of Public Administration (fourth edition). Fort Worth: Harcourt Brace College

Publishers.

Ammons, D. N., (1996). Municipal Benchmarks: assessing local performance and establishing community standards.

Thousand Oaks: Sage.

Audit Commission (2000). Aiming to improve: the principles of performance measurement. London: Audit commission.

Audit Commission (2001). Annual Report. London: Audit Commission.

Barzelay, M. (1992) Breaking through Bureaucracy: a New Vision for Managing in Government. Berkeley: University of

California Press.

Bennet, C., Lenihan, D.G., Williams, J. and Young, W. (2001) Measuring Quality of Life: The Use of Societal Outcome by

Parliamentarians. Office of the Auditor General of Canada.

Berger, P.L. and Luckmann, T. (1966). The Social Construction of Reality. New York: Doubleday and company.

Bertelsmann Foundation (2001). www.cities-of-tomorrow.net (accessed December 2001).

Blank, J. (2000). Public Provision and Performance: Contributions from Efficiency and Productivity Measurement.

Amsterdam: Elsevier.

Bouckaert, G. & Auwers, T. (1999a). De modernisering van de Vlaamse Overheid (the modernisation of the Flemish

government). Brugge: Die Keure

Bouckaert, G. (1992), Productivity analysis in the Public Sector: the case of Fire service. International Review of

Administrative Sciences. 58 (2).

Bouckaert, G. (1993), Efficiency measurement from a management perspective: a case of the civil registry office in

Flanders. International Review of Administrative Sciences. 59 (1).

Bouckaert, G. and van Dooren, W. (2000). Subsidiestromen van de Vlaamse gemeenschap naar de gemeenten en

OCMWs. (Subsidy flows from the Flemish Government to local authorities and public centres for social welfare). Leuven:

unpublished report.

Bouckaert, G., & Auwers, T. (1999b). Prestaties Meten in de Overheid (performance measurement in the public sector).

Brugge: Die Keure.

Caiden, N. (1998). ‘A New Generation of Budget Reform.’ In: Peters, B.G. and Savoie, D. Taking Stock: Assessing Public Sector Reforms. Montreal & Kingston: McGill-Queen’s University Press and CCMD.

Carr-Hill, R., Lintott, J., Bowen, J. and Hopkins, M. (1996). ‘Societal Outcome Measurement: the European Dimension.’ In: Smith, P. Measuring Outcome in the Public Sector. London: Taylor & Francis Ltd, pp.174-194.

Charnes, A., Cooper, W., Lewin, A.Y. and Seiford, L.M. (1994). Data Envelopment Analysis: Theory, Methodology and Applications. Boston: Kluwer Academic Publishers.

Chaty, L. (1999). ‘La “responsabilisation” et le contrat managérial: figures et outils de la performance administrative en Europe’ (‘responsabilisation’ and the management contract: figures and instruments of administrative performance in Europe). Politiques et Management Public. 17:2 pp.87-92.

Cloete, F. (2001). ‘Improving Good Governance with Electronic Policy Management Assessment Tools.’ Paper presented at the Public Futures 2nd annual performance in government conference, London, September 2001.

De Borger, B., Kerstens, K., Moesen, W. and Vanneste, J. (1994). ‘Explaining Differences in Productive Efficiency: an Application to Belgian Municipalities.’ Public Choice. 80:3-4.

DiMaggio, P.J. and Powell, W.W. (1983). ‘The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields.’ American Sociological Review. 48:2 pp.147-160.

Donaldson, L. (1985). In Defence of Organization Theory: a Reply to the Critics. Cambridge: Cambridge University Press.

Dowrick, S. and Quiggin, J. (1998). ‘Measures of Economic Welfare: The Uses and Abuses of GDP.’ In: Eckersley, R. (ed.). Measuring Progress: Is Life Getting Better? Collingwood, VIC: CSIRO Publishing, pp.93-121.

Dunleavy, P. (1991). Democracy, Bureaucracy and Public Choice: Economic Explanations in Political Science. New York: Harvester Wheatsheaf.

Eckersley, R. (ed.) (1998). Measuring Progress: Is Life Getting Better? Collingwood, VIC: CSIRO Publishing.

Fisher, R. (1994). ‘An Overview of Performance Measurement.’ Public Management. 76:1 pp.2-9.

Frederickson, H.G. (1971). ‘Toward a New Public Administration.’ In: Marini, F.E. Toward a New Public Administration: The Minnowbrook Perspective. New York: Chandler.

Fuchs, W. et al. (1988). Lexikon zur Soziologie. Opladen: Westdeutscher Verlag.

Goñi, E.Z. (1991). Financial Management Innovations in Europe. Maastricht: EIPA.

Goodnow, F. (1900). ‘Politics and Administration.’ In: Shafritz, J.M. and Hyde, A.C. (1997). Classics of Public Administration (fourth edition). Fort Worth: Harcourt Brace College Publishers.

Guyomarch, A. (1999). ‘‘Public Service’, ‘Public Management’ and the ‘Modernization’ of French Public Administration.’ Public Administration. 77:1 pp.171-193.

Halachmi, A. (1996). ‘Potential Dysfunctions of Quality Awards Competitions.’ In: Hill, H., Klages, H. and Löffler, E. Quality, Innovation and Measurement in the Public Sector. Frankfurt am Main: Peter Lang.

Hatry, H. (1992). How Effective Are Your Community Services? Procedures for Measuring Their Quality. Washington D.C.: The Urban Institute.

Hatry, H. (1999). Performance Measurement: Getting Results. Washington D.C.: The Urban Institute Press.

Heinrich, C.J. (1999). ‘Do Government Bureaucrats Make Effective Use of Performance Management Information?’ Journal of Public Administration Research and Theory. 9:3 pp.363-393.

Hendriks, F. and Tops, P. (1999). ‘Between Democracy and Efficiency: Trends in Local Government Reform in the Netherlands and Germany.’ Public Administration. 77:1 pp.133-153.

Hill, H. and Klages, H. (eds.) (1996a). Controlling im Neuen Steuerungsmodell: Werkstattberichte zur Einführung von Controlling (controlling in the ‘Neue Steuerung’ model: writings on the implementation of controlling). Düsseldorf: Raabe.

Hill, H. and Klages, H. (eds.) (1996b). Wege in die Neue Steuerung (directions in the ‘Neue Steuerung’). Düsseldorf: Raabe.

Holzer, M. (ed.) (1976). Productivity in Public Organizations. Port Washington, N.Y.: Kennikat Press.

Hood, C. (1991). ‘A Public Management for All Seasons?’ Public Administration. 69:1 pp.3-19.

Hoover Commission (1949). The Hoover Commission Report on Organization of the Executive Branch of Government. New York: McGraw-Hill Book Co.

IMD (2002). www.imd.ch (accessed March 2002).

Janssens, G. and Maessen, F. (1996). ‘Doelmatigheidskengetallen bij de Rijksoverheid: een tussenbalans. Op weg naar transparantie en sturen op resultaat’ (effectiveness indicators in central government: an interim assessment. Towards transparency and steering on results). Beleidsanalyse. (3).

Jensen, M.C. and Meckling, W.H. (1976). ‘Theory of the Firm: Managerial Behaviour, Agency Costs and Ownership Structure.’ Journal of Financial Economics. 3:4 pp.305-360.

Karlöf, B. and Östblom, S. (1995). Benchmarking: a Signpost to Excellence in Quality and Productivity. Chichester: John Wiley and Sons.

Klages, H. (1999). Verwaltungsmodernisierung: “harte” und “weiche” Aspekte (administrative modernisation: “hard” and “soft” aspects). Speyer: Forschungsberichte.

KPMG (2002). Competitive Alternatives: Comparing Business Costs in North America, Europe and Japan. KPMG.

Lammers, C.J., Mijs, A.A. and Van Noort, W.J. (1997). Organisaties vergelijkenderwijs: ontwikkeling en relevantie van het sociologische denken over organisaties (comparing organisations: development and relevance of sociological thinking about organisations). Utrecht: Het Spectrum.

Lee, G. and Perry, J.L. (2002). ‘Are Computers Boosting Productivity? A Test of the Paradox in State Governments.’ Journal of Public Administration Research and Theory. 12:1 pp.77-102.

Liner, B., Hatry, H.P., Vinson, E., Allen, R., Dusenbury, P., Bryant, S. and Snell, R. (2001). Making Results-Based State Government Work. Washington D.C.: The Urban Institute Press.

Lovell, C.A.K., Walters, L.C. and Wood, L.L. (1990). Stratified Models of Education Production Using DEA and Regression Analysis. Chapel Hill: University of North Carolina, Department of Economics.

Mandelbrot, B. (1977). Fractals: Form, Chance and Dimension. San Francisco: Freeman.

March, J.G. and Olsen, J.P. (1989). Rediscovering Institutions: the Organizational Basis of Politics. New York: Free Press.

Merton, R.K. (1949). ‘Manifest and Latent Functions.’ In: Calhoun, C., Gerteis, J., Moody, J., Pfaff, S., Schmidt, K. and Virk, I. (2002). Classical Sociological Theory. Oxford: Blackwell.

Moe, R.C. (1987). ‘Exploring the Limits of Privatisation.’ In: Shafritz, J.M. and Hyde, A.C. (1997). Classics of Public Administration (fourth edition). Fort Worth: Harcourt Brace College Publishers.

Moniolle, C. (1999). ‘Les centres de responsabilité: bilan et perspectives’ (the ‘centres de responsabilité’: assessment and perspectives). La Revue du Trésor. 79:7 pp.432-440.

Morley, E., Bryant, S.P. and Hatry, H.P. (2001). Comparative Performance Measurement. Washington D.C.: The Urban Institute Press.

Mosher, F. (1968). Democracy and the Public Service. New York: Oxford University Press.

Mosher, F.C. et al. (1974). ‘Watergate: Implications for Responsible Government.’ In: Shafritz, J.M. and Hyde, A.C. (1997). Classics of Public Administration (fourth edition). Fort Worth: Harcourt Brace College Publishers.

Naschold, F. (1996). New Frontiers in Public Sector Management: Trends and Issues in State and Local Government in Europe. Berlin: Walter De Gruyter.

National Performance Review (1993). From Red Tape to Results: Creating a Government That Works Better and Costs Less. Washington D.C.: US Government Printing Office.

OECD (1997). In Search of Results: Performance Management Practices. Paris: OECD.

Office of Management and Budget (2001). The President’s Management Agenda: Fiscal Year 2002. Washington D.C.: OMB.

Otten, M. (1996). ‘Ontspoorde technisch complexe projecten (derailed technically complex projects).’ In: De Bruijn, J., De Jong, P., Korsten, A. and Van Zanten, W. (eds.). Grote Projecten, Besluitvorming en Management (large projects, decision making and management). Alphen a/d Rijn: Samson H.D. Tjeenk Willink.

Perry, J. and Kraemer, K. (1983). Public Management: Public and Private Perspectives. Mountain View, Calif.: Mayfield.

Peters, B.G. (1998). Bringing the State Back In – Again. Ottawa: CCMD.

Pollitt, C. (2001). ‘Clarifying Convergence: Striking Similarities and Durable Differences in Public Management Reform.’ Public Management Review. 3:4 pp.471-492.

Pollitt, C. and Bouckaert, G. (2000). Public Management Reform: a Comparative Analysis. Oxford: Oxford University Press.

Pollitt, C. and Harrison, S. (eds.) (1994). Handbook of Public Services Management. Oxford: Blackwell.

President of the Treasury Board (2001). Canada’s Performance 2001: Annual Report to Parliament. Ottawa: President of the Treasury Board.

Rainey, H.G. (1998). ‘Assessing Past and Current Personnel Reforms.’ In: Peters, B.G. and Savoie, D. Taking Stock: Assessing Public Sector Reforms. Montreal & Kingston: McGill-Queen’s University Press and CCMD.

Schick, A. (1966). ‘The Road to PPB: The Stages of Budget Reform.’ Public Administration Review. 26:4 pp.243-258.

Scott, W.R. and Meyer, J.W. (1994). Institutional Environments and Organizations: Structural Complexity and Individualism. Thousand Oaks: Sage.

Sharkansky, I. (1975). Public Administration: Policy Making in Government Agencies. Chicago: Rand McNally.

Smith, P. (1996). ‘A Framework for Analysing the Measurement of Outcome.’ In: Smith, P. Measuring Outcome in the Public Sector. London: Taylor & Francis Ltd, pp.1-19.

Sorber, B. (1999). ‘Performance Measurement in the Central Government Departments of the Netherlands.’ In: Halachmi, A. (ed.). Performance & Quality Measurement in Government: Issues and Experiences. Burke, VA.: Chatelaine Press.

Spendolini, M.J. (1994). The Benchmarking Book. New York: AMACOM.

Swindell, D. and Kelly, J.M. ‘Linking Citizen Satisfaction Data to Performance Measures: a Preliminary Evaluation.’ Public Performance & Management Review. 24 pp.30-52.

Tulkens, H. (1990). Non-Parametric Efficiency Analysis in Four Service Activities: Retail Banking, Municipalities, Courts and Urban Transit. Paper presented at the Third Franco-American Seminar on Productivity Issues in Services at the Micro Level, National Bureau of Economic Research, Cambridge, Mass., July 23-26, 1990.

Tweede Kamer der Staten Generaal (1998-1999). Van beleidsbegroting tot beleidsverantwoording (from policy budget to policy accountability). 26573, nr. 2.

van Thiel, S. and Leeuw, F.L. (2002). ‘The Performance Paradox in the Public Sector.’ Public Performance and Management Review. 25:3 pp.267-281.

Wildavsky, A. (1969). ‘Rescuing Policy Analysis from PPBS.’ Public Administration Review. 29 pp.189-202.

Wildavsky, A. (2000). Speaking Truth to Power: The Art and Craft of Policy Analysis. New Brunswick: Transaction Publishers.

Williams, D.W. (2000). ‘Reinventing the Proverbs of Government.’ Public Administration Review. 60:6 pp.522-534.

Williamson, O.E. (1975). Markets and Hierarchies: Analysis and Antitrust Implications. London: The Free Press.

Williamson, O.E. (1985). The Economic Institutions of Capitalism: Firms, Markets, Relational Contracting. London: The Free Press.

Wilson, J.Q. (1989). Bureaucracy: What Government Agencies Do and Why They Do It. New York: Basic Books.

Wilson, W. (1887). ‘The Study of Administration.’ In: Shafritz, J.M. and Hyde, A.C. (1997). Classics of Public Administration (fourth edition). Fort Worth: Harcourt Brace College Publishers.

World Bank Group (1997). World Development Report: The State in a Changing World. Washington D.C.: The World Bank Group.

Zifcak, S. (1994). New Managerialism: Administrative Reform in Whitehall and Canberra. Buckingham: Open University Press.
