
Institute of Technology and Regional Policy - InTeReg

How not to compare innovation performance: A critical assessment of the European Innovation Scoreboard

Conference paper for the 2nd PRIME Indicators Conference on

STI Indicators for Policy. Addressing New Demands of Stakeholders

Andreas Schibany Joanneum Research, Institute of Technology and Regional Policy

Sensengasse 1, 1090 Vienna, Austria e-mail: [email protected]

tel: +43-1-581 75 20/2823

Gerhard Streicher Joanneum Research, Institute of Technology and Regional Policy

Sensengasse 1, 1090 Vienna, Austria e-mail: [email protected]

tel: +43-1-581 75 20/2814

May, 2008

Abstract:

For quite some time now, European countries have felt the need to compare their respective performances through the application of benchmarking and scoreboard tools, the findings of which are typically processed into country rankings, which in turn guarantee a large degree of publicity in the media and among the interested public. Within the scope of such activities, the European Innovation Scoreboard (EIS) has, over the past years, established itself as probably the most attention-grabbing benchmarking tool in the discussion of European technology policy. In this paper we address the current status of the EIS and discuss some of the strengths and weaknesses of its indicators. We show that there is a process of convergence in innovation performance in Europe, but that the scope for short-term policy influence on the variables measured by the EIS is rather limited.


1. INTRODUCTION

The European Innovation Scoreboard (EIS) responds to an explicit request by the Lisbon European Council. Since innovation is a key factor in determining productivity growth, and since the EU wanted to become the most competitive and dynamic knowledge-based economy in the world by 2010, instruments were needed to measure the evolution towards this target. The EIS provides a comparative assessment of the innovation performance of EU Member States. It uses 25 indicators under five headings relating to innovation performance and compares performance against a European average, combining data from different sources (such as the Community Innovation Survey, EUROSTAT and the OECD). To make this performance visible and comparable, the EIS aggregates all indicators into an overall index, the Summary Innovation Index (SII). So far so good.

In these crude terms, the EIS expresses once more the need felt by all European countries to compare their respective performance through benchmarking and scoreboard tools, the findings of which are then typically processed into country rankings. Besides the EIS, the number of scoreboards covering innovation has also increased since the 1990s. Examples include the Structural Indicators as the main indicators designed to assess progress towards achieving the Lisbon targets; DG Research’s Key Figures; DG Enterprise’s Enterprise Policy Scoreboard; the OECD’s Science, Technology and Industry Scoreboard; the World Knowledge Competitiveness Index; the European Competitiveness Index; the UK Competitiveness Index by the Centre for International Competitiveness; and, last but not least, the World Economic Forum Competitiveness Index. However, within the scope of such activities, the EIS has, over the past years, established itself as probably the most attention-grabbing benchmarking tool in the discussion of European technology policy. The annually published results guarantee a large degree of publicity in the media and among the interested public.

It is thus an understandable interest of policy-makers to compare one country with another or with the average of a country sample; there is no other way to find the specific strengths and weaknesses which, per definitionem, exist only as a relative concept. Indicators and scoreboards are thus crucial for evidence-based policy, resource allocation and accountability requirements. But the inflation of innovation scoreboards and their apparent attractiveness to the media and the policy community is often misused for the purpose of an international “beauty contest”. Behind the country rankings very often lies a simple mechanical application of the indicators and their summation into a total, without considering the economic and institutional context. This kind of exercise would result in a simple “indicator-led policy” with simplistic summaries and misguided policy actions (if the EIS were taken seriously). It was the European Commission itself that stated that “indicators are no substitute for in-depth qualitative assessment” (European Commission 2000, p. 6). At the very beginning of all these exercises, the European Commission distanced itself from the questionable beauty-contest approaches of some business management institutes.

However, reality, i.e. public perception and the clumsy presentation of the results by the European Commission, looks quite different. The Annual Progress Reports to the European Council (which are based on the structural indicators) contain mainly general statements at the European level, without much regard to the national level. And in February, when the EIS is published, the European world generally stares mesmerised at one number: the Summary Innovation Index (SII). The public discussion is then confined to the fear of losing competitiveness (in the case of a decline in the ranking) or to pride at being in the group of leaders. No genuinely open and policy-relevant discussion of the results beyond the simple ranking takes place, either at the European or the national level. As a policy tool the EIS does not matter (see Schibany et al. 2007).


But why, then, is the EIS presented as such an important tool? This issue can only be explored by considering the institutional background of the EIS, which will be discussed in the first section of the paper. The second section will discuss some aspects of the selected indicators, followed by a broader discussion of the Summary Innovation Index. The policy-relevant conclusions of the EIS will be discussed at the end of the paper. Although the following analyses and explanations are directed towards a general assessment of the EIS, the position of Austria plays a quite prominent role. This is for two reasons: (i) the authors are interested in the Austrian performance and (ii) the paper is based on a study for the Austrian ministries aimed at finding an adequate framework for interpreting the EIS, so that policy makers can respond to future innovation scoreboards with a greater degree of composure (Schibany et al. 2007).

2. THE INSTITUTIONAL BACKGROUND OF THE EIS

It was at the summit in Lisbon in March 2000 that the EU Heads of State and Government not only initiated the “Lisbon Process” but also presented a new method of intervention. This new instrument was described as a “new open method of coordination” (OMC) and was aimed at completing or reinforcing Community action, in particular where there is little scope for a legislative solution (Régent 2002). The EIS was part of the OMC and was intended to strengthen the logic of ‘mutual learning, benchmarking, best practice and peer pressure to achieve objectives’ (Régent 2002).

The main elements of the process are (Giering and Metz, 2004):

• First, fixing guidelines combined with specific timetables for achieving the goals which they set in the short, medium and long terms,

• Second, establishing quantitative and qualitative indicators and benchmarks,

• Third, translating these guidelines into national and regional policies, and

• Fourth, monitoring and evaluating the process.

The OMC thus represents a new regulatory method which insists on the non-compulsory character of rules, their flexibility and openness, their decentralised character and the plurality of actors involved. This, however, contrasts with the main characteristics of the traditional ‘community method’ (Goetschy, 2003). Where there is little scope for legislative solutions, the OMC thus represents a form of supranational governance by means of soft regulation (Régent 2002). Although this method had already emerged before the Lisbon summit in the field of employment policy, it found its way into other policy areas and was – following the specifications set up at Lisbon – first applied to the fight against poverty and social exclusion, the question of the future of pension schemes and, last but not least, innovation.

From a critical point of view, this new approach implies some risks as well as benefits in terms of improving the EU’s ability to solve problems (Giering and Metz 2004). One of its main effects lies in helping member states overcome national resistance to structural reforms; research and development as well as innovation are now at the top of the political agenda of all member states. Governments are thus put under pressure, because the OMC creates comparability among member states, and public interest is aroused by the mechanism of ‘naming, blaming and shaming’ (Giering and Metz 2004). By this kind of construction, the OMC enables (and requires) agreement on quantitative goals and enhances the participation of the member states, because there are no mechanisms of sanction for those who do not meet the goals. This leads to the main problem with this kind of approach: what is the learning effect of this kind of comparability among member states? As Grupp et al. (2004) have stated:


“As national systems of innovation differ from each other, good policy making in one country may be poor policy making in another one. By relying on composite indicators the structure and the ‘revealed’ comparative advantages of the countries compared remain hidden; the underlying detailed information cannot serve as ‘eye-openers’ to the policy observer” (Grupp et al. 2004, p. 1382).

3. THE INDICATORS OF THE EIS

The first version of the EIS was published in 2000; it used 16 indicators and covered 17 countries. Since then, it has been published annually, with changes both in the number of indicators and countries covered.

The present version, the EIS 2007, includes innovation indicators and trend analysis for the EU27 member states as well as for Croatia, Turkey, Iceland, Norway, Switzerland, Japan, the US, Australia, Canada and Israel (Innometrics 2008).

The innovation indicators are assigned to two main categories (input and output) and to five sub-categories that capture key dimensions of the innovation process. In 2007, the 15 indicators of innovation inputs were divided across the following three dimensions (see Table 1):

1. Innovation drivers (5 indicators), which measure the structural conditions required for innovation potential;

2. Knowledge creation (4 indicators), which measure the investment in R&D activities, considered as key elements for a successful knowledge-based economy;

3. Innovation & entrepreneurship (6 indicators), which measure the efforts towards innovation at the level of firms.

Ten indicators are used for innovation outputs:

4. Applications (5 indicators), which measure the performance, expressed in terms of labour and business activities, and their value added in innovative sectors;

5. Intellectual property (5 indicators), which measure the achieved results in terms of successful know-how.

As innovation is a multifaceted phenomenon and a non-linear process, the EIS indicators give a good overview of the performance of countries in some of the relevant fields. The set is not restricted to R&D alone, as the assessment of innovation capacity – defined as the ability of a system not only to produce new ideas but also to commercialise a flow of innovative technologies over the longer term – requires a range of factors deemed important for effective innovative effort (Veugelers 2007).

Although innovation drives economic performance and has an effect on multi-factor productivity, it is only one factor among other parameters. GDP growth and wealth are first of all influenced by macroeconomic variables such as the volume of accumulated capital, the volume of labour input (depending on demographic variables, labour force participation and migration), macroeconomic stability (interest rates, inflation, savings and investment ratios), the functioning of factor markets (capital, labour), international relations (trade, foreign investment), deregulation and financial stability. All this is outside the scope of the EIS exercise, which should be kept in mind when some of the EIS indicators are related to economic output categories.


Table 1: EIS 2007 Indicators

INPUT - Innovation drivers [data source in brackets]
1.1 S&E graduates per 1000 population aged 20-29 [Eurostat]
1.2 Population with tertiary education per 100 population aged 25-64 [Eurostat, OECD]
1.3 Broadband penetration rate [Eurostat, OECD]
1.4 Participation in life-long learning per 100 population aged 25-64 [Eurostat]
1.5 Youth education attainment level (% of population aged 20-24 with at least upper secondary education) [Eurostat]

INPUT - Knowledge creation
2.1 Public R&D expenditures (% of GDP) [Eurostat, OECD]
2.2 Business R&D expenditures (% of GDP) [Eurostat, OECD]
2.3 Share of medium-high-tech and high-tech R&D (% of manufacturing R&D expenditures) [Eurostat, OECD]
2.4 Share of enterprises receiving public funding for innovation [Eurostat (CIS4)]

INPUT - Innovation & entrepreneurship
3.1 SMEs innovating in-house (% of all SMEs) [Eurostat (CIS4)]
3.2 Innovative SMEs co-operating with others [Eurostat (CIS4)]
3.3 Innovation expenditures [Eurostat (CIS4)]
3.4 Early-stage venture capital (% of GDP) [Eurostat]
3.5 ICT expenditures (% of GDP) [Eurostat, World Bank]
3.6 SMEs introduced organisational innovation (% of all SMEs) [Eurostat (CIS4)]

OUTPUT - Application
4.1 Employment in high-tech services (% of total workforce) [Eurostat]
4.2 Exports of high technology products as a share of total exports [Eurostat]
4.3 Sales of new-to-market products (% of total turnover) [Eurostat (CIS4)]
4.4 Sales of new-to-firm products (% of total turnover) [Eurostat (CIS4)]
4.5 Employment in medium-high and high-tech manufacturing (% of total workforce) [Eurostat, OECD]

OUTPUT - Intellectual property
5.1 EPO patents per million population [Eurostat, OECD]
5.2 USPTO patents per million population [Eurostat, OECD]
5.3 Triad patents per million population [Eurostat, OECD]
5.4 Community trademarks per million population [Eurostat, OECD]
5.5 Community industrial designs per million population [Eurostat, OECD]

Source: EIS 2007

3.1 Critique of the list of indicators

Selection of indicators
The indicators used in the EIS span a wide range that covers many aspects of the innovation process. Ultimately, however, this results in an eclecticism which appears almost arbitrary: any concise rationale for the selection of indicators and, above all, for their mutual interaction is mostly missing. Similarly missing is a hierarchical ranking of indicators: some indicators refer to narrow, clearly defined microeconomic facts (e.g. the proportion of subsidised enterprises, the proportion of cooperating enterprises, etc.), while others address structural facts of an entire national economy (population with tertiary education, employment in high-tech, etc.). Indicators should thus be identified and selected on the basis of a conceptual analysis rather than a simple statistical correlation analysis (Sajeva et al. 2005).

Short-term versus long-term indicators
A main goal of the EIS is to illustrate country-specific trends for each indicator with respect to the EU trend (or any other change in the average of the countries considered). When interpreting these trends one should bear in mind that the EIS indicators are influenced by a number of other variables and that they show very different temporal behaviour:

• Some indicators are very “structural” by nature and typically change only over a long period of time, such as industrial structures or the average education of the population. Hence, strong short-term changes of these indicators may be caused by re-definitions, changes in the survey sample, etc., and should be interpreted with caution.

• Several indicators, such as innovation expenditures or sales shares with new-to-market products, are affected by business cycle developments.

• The EIS also includes some indicators that are likely to develop towards a saturation level in some developed countries, such as the broadband penetration rate or ICT expenditures. Interpreting short-term country trends for these indicators demands some knowledge of the shape of the country-specific diffusion curve and the country-specific saturation level, i.e. interpretation is not straightforward.

• There are some indicators that show high short-term fluctuations. This is especially true for early stage venture capital.i

As the various cycles and developments that affect changes in the EIS indicators may not run in parallel between countries, one should exert much caution in the comparison of trends between countries before jumping to the conclusion that a country is “falling behind” or “moving ahead” with respect to any average trend.

Multicollinearity
Some indicators are highly correlated, so that they essentially measure the same latent innovation determinant. As a result, this determinant is assigned excessive weight, to the benefit of those countries that are well positioned in this field. Ceteris paribus, these countries are given better positions in the overall ranking. One example pertains to the indicators with a strong focus on a country’s performance in high-tech industries. As high-tech export performance is highly correlated with employment as well as R&D in high- and medium-tech manufacturing, this indicator (without a higher level of sectoral disaggregation) does not add much new information to the existing set of indicators. The fact that innovation can also take place outside the high-tech sectors, with similar importance for a country’s competitiveness, is not reflected in the EIS. Another example of a highly correlated group are the three indicators related to patents (EPO patents, USPTO patents and triad patents); for example, the 2003 correlations between these three indicators are 0.73, 0.84 and 0.89, respectively. Their correlations with the two other indicators in this group, community trademarks and industrial designs, are lower, but still (an unhealthy?) 0.2 to 0.5.
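As an illustration of the kind of check underlying these statements (this is our sketch, not the authors' actual computation; all figures below are invented), pairwise correlations can be read directly off a country-by-indicator matrix:

```python
import numpy as np

# Invented per-country values of the three patent indicators (per million
# population), used only to demonstrate the correlation check itself.
epo   = np.array([310.0, 250.0, 180.0, 90.0, 40.0, 15.0])
uspto = np.array([280.0, 200.0, 160.0, 70.0, 35.0, 10.0])
triad = np.array([ 70.0,  55.0,  40.0, 18.0,  9.0,  3.0])

corr = np.corrcoef(np.vstack([epo, uspto, triad]))  # 3x3 correlation matrix
print(np.round(corr, 2))
# Entries close to 1 mean the indicators largely measure the same latent
# dimension, which therefore implicitly receives extra weight in a composite index.
```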

The “more is better” assumption
Without exception, all indicators suppose that “more is better”. This, however, is a heroic assumption. Take the example of indicator 2.4, Share of enterprises receiving public funding for innovation. If more were truly better, the optimal situation would be to have 100 % of enterprises receive public funding, which would hardly be compatible with any efficiency criterion. Nevertheless, this situation would earn a country top marks on this indicator. Similar arguments can be brought against quite a few other expenditure-related indicators (2.2, 2.3, 3.3, 3.4 and 3.5): high spending on ICT equipment could also be a waste of scarce resources, for example (as probably happened in the late 1990s). In principle, this reasoning also holds for other “ratio indicators”: from an economic point of view, a population with 100 % tertiary education is certainly not “optimal”. To make a long statement short: most variables arguably have some “optimal” value (which, additionally, can differ between countries), expansion beyond which can be inefficient or even counter-productive.


Outliers
A satisfactory solution also has yet to be found for outliers. Especially when it comes to structural indicators (e.g. high-tech exports as a proportion of total exports), very small countries frequently (and due to singular historical specificities) show outlier-like values, which can seriously distort their ranking. For instance, the indicator 4.2 Exports of high technology products as a share of total exports lies 223 % above the EU average for Malta. It would thus be helpful to combine such results with detailed background information on the features of the respective (national) innovation system that may affect such a result.

Statistical issues
And last, but not least, some indicators may well raise doubts as to their capacity for international comparisons. This applies chiefly to indicators based on the data of the Community Innovation Survey (CIS), where we need to question the extent to which national findings can reasonably be used for international comparisons. Although the CIS data are based on harmonised survey questionnaires, the results of CIS3 and CIS4 (or CISlight) were, to a certain extent, not comparable. For instance, the share of enterprises in Denmark receiving public funding increased by 100 % between CIS3 and CIS4. Other examples include the share of SMEs innovating in-house, which increased by 50 % in Italy within two years while it decreased by 50 % in Austria; or the share of SMEs using organisational innovations, which doubled in Denmark but decreased by 25 % in Germany – all of it in the two years between CIS3 and CIS4. Thus, many of the survey results can be explained more by statistical artefacts than by economic or innovation-related factors. However, one should not forget that the results of the CIS are used for one quarter of the EIS indicators.

3.2 The 25 Indicators of the EIS
Nevertheless, the EIS indicators are an important tool for assessing the way a country copes with the challenges stemming from shifts in economic structures and the competitive environment. If one is aware of the limits of their usefulness, the range of interpretation and the pitfalls of the results, they can provide a good overview of trends in some innovation-related issues over a given period of time.

The following Figure 1 shows the 1999-2006 time series of the 25 indicators included in the SII:

Figure 1: The time series of the 25 indicators

[Small-multiple line charts, one panel per indicator (1.1 to 5.5, as listed in Table 1) plus a panel for the Summary Innovation Index (SII), showing the country time series for 1999-2006.]

Source: European Innovation Scoreboard 2006 & 2007 Database

The reader may forgive the cluttered appearance of the diagrams; their main task is to convey some general messages:

First, most indicators exhibit quite smooth developments with little variability over the period 1999-2006. This is not wholly surprising, as indicators like 1.2 population with tertiary education aged 25-64, but also 2.1 public R&D expenditures, are not expected to change much from year to year. The highest variability, incidentally, is shown by indicators taken from the CIS (Community Innovation Survey); these include the SME indicators 3.1, 3.2, 3.3 and 3.6, as well as indicators 4.3 and 4.4 (turnover with innovative products). Probably the main reason for this is the character of the CIS as a random sample (and the sampling methods, which quite frequently are subject to some minor or not-so-minor revisions).

The second observation is that developments are shared across countries: typically, the indicators show a common trend, 1.3 broadband penetration rate being the most extreme example. Moreover, for some indicators a slight convergence can be observed (most prominently in the combined SII, but also for the indicators 1.5 youth education attainment and 4.1 employment in high-tech services).

Lastly, the indicators exhibit startling differences between countries: in 2006, the indicator 5.5 Community industrial designs per million population was 0.9 in Romania, but 240 in Denmark (the minimum for the “old” members was Greece’s 3.14). The 2005 value in Luxembourg, at 398, was even larger, but dropped to below 100 in 2006 (such “pockets of variability” also cast some doubt on the reliability and “trustworthiness” of some indicators). Such differences – if they are real and not merely artefacts based on (a lack of) “statistical quality” – certainly reflect not only genuine “distances” between countries along common dimensions, but also – and probably more so – cultural differences. In all probability, perhaps caused by legal differences, industrial designs have a largely different meaning in different countries. The “common dimension” is not really so common after all – as a consequence, the indicator does not measure the same underlying latent variable.


Besides cultural differences, the contamination of the indicators with statistical artefacts must be taken into account: the EU average of indicator 3.4 early-stage venture capital has supposedly more than doubled, from 0.023 in 2005 to 0.053 in 2006. This rise, however, has been exclusively driven by venture capital in the UK – which is credited with a whopping 0.224 % of GDP in 2006, a fivefold increase over the 0.047 % of the year before.

A closer look reveals quite startling variability even in indicators which at first glance look quite smooth: take, for example, the above-mentioned 1.2 population with tertiary education aged 25-64. This is a 40-year moving average and should therefore indeed change only slowly. A look at the numbers, however, shows a different picture:

Table 2: Population with tertiary education aged 25-64, selected countries

Country          1999   2000   2001   2002   2003   2004   2005   2006
Denmark          26.5   25.8   28.1   29.0   31.8   32.4   33.5   34.7
Lithuania        42.6   41.8   22.4   21.9   23.2   24.2   26.3   26.8
Austria          14.3   14.2   14.5   16.9   16.5   18.8   17.8   17.6
EU                 --   19.4   19.6   19.9   20.8   21.7   22.4   23.0
United States    35.8   36.5   37.3   38.1   38.4   37.0   39.0     --

Source: EIS

Supposedly, Austria’s share of population with tertiary education rose from 14.5 % in 2001 to 16.9 % in 2002 and further to 18.8 % in 2004, before dropping to 17.8 % only one year later. Other countries show similar patterns. This, however, is simply not credible: for the share to rise by just 1 percentage point in one year, the fraction of people with tertiary education within the “new” 25-year age cohort would have to be about three times the previous year’s overall share (assuming age cohorts of roughly equal size). If this seems hardly probable, a drop of one percentage point, as happened in Austria between 2004 and 2005, is mathematically impossible. Of course, with heavy inward and outward migration such changes would be possible – except that such migration simply has not taken place. As we can see, even such an “uncontentious” and easy-to-measure (at least this is what one would suspect) indicator can show very surprising behaviour. And, as Table 2 shows, funny behaviour with respect to this indicator is not restricted to Austria (Lithuania, for example, lost half of its population with tertiary education – within a year).
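A back-of-the-envelope sketch of this cohort arithmetic (our illustration, not the authors' calculation; it assumes 40 equally sized one-year cohorts, no migration, and an exiting cohort with roughly the overall share) makes the point explicit:

```python
N_COHORTS = 40  # population aged 25-64, one cohort per year of age

def required_entry_share(exit_share, delta_pp):
    """Tertiary share (in %) the entering 25-year-old cohort would need so that
    the overall 25-64 share moves by delta_pp percentage points in one year."""
    # new_share = old_share + (entry_share - exit_share) / N_COHORTS
    return exit_share + delta_pp * N_COHORTS

# Austria-like case: assume the exiting 64-year-old cohort sits near the
# overall share of about 17 %.
print(required_entry_share(17.0, +1.0))  # 57.0 -> roughly three times the overall share
print(required_entry_share(17.0, -1.0))  # -23.0 -> a negative share, i.e. impossible
```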

4. THE SUMMARY INNOVATION INDEX (SII)

After the last section’s descriptive treatment of the set of 25 indicators included in the EIS, we now turn to the one indicator which regularly elicits public attention. Pursuing an ambitious target, the EIS tries to gauge a country’s innovative performance (and its position vis-à-vis other countries) with the help of a single number, the Summary Innovation Index (SII). This composite indicator, which is an aggregate of the 25 indicators, is the basis for an international ranking (see Table 3), which regularly commands quite some political attention. The way this aggregate is brought about, however – and the criticism it has already faced – merits some closer inspection.

The almost natural reaction of almost any country (probably apart from those at the top) when confronted with its position in international rankings involves intentions to do better in the future – and/or a (formalistic) justification (excuse?) as to why the ranking does not reflect its “real” position.

Within the framework of the SII, this aim – to do better – can be achieved via three strategies:


1. the inclusion of new indicators which are favourable to a country, and/or the elimination of unfavourable indicators;

2. changes to the weighting method, i.e. the relative revaluation of favourable indicators or the devaluation of unfavourable ones;

3. and last (but not least) the endeavour to improve one’s position in some or all of the indicators.

Strategies #1 and #2 involve some “manipulation” of the SII’s structure; as such, they can be pursued retrospectively. In contrast, strategy #3 works only “pro futuro”. In the following, the possibilities of strategy #2 and the feasibility of strategy #3 will be explored; strategy #1 depends crucially on the availability of alternative or additional indicators and will not be explored in the present context.

Normalisation
The SII is a composite indicator: it represents, in a single number, a multitude of diverse indicators; the details of this aggregation have far-reaching consequences for the final results.

The first “problem” has to do with indicators measured on different scales: “fractional indicators” (like the share of people with tertiary education) have values between 0 and 100 % by definition (and even then, their likely values can vary widely: the ratio of R&D expenditures to GDP is typically around 1-4 %, whereas the share of medium- and high-tech R&D is around 60-90 %); other indicators are unbounded (at least in principle, like the number of patent applications). Therefore, to be comparable, the indicators used in the composite index have to be normalised.

The EIS uses the so-called MinMax normalisation, which, for a given indicator, relates the difference between a country’s value and the value of the lowest-ranked country to the indicator’s range, i.e. the difference between the best- and the worst-performing country:

Yi^N = (xi - min(x)) / (max(x) - min(x))

Here Yi^N is country i’s normalised value, xi is the country’s value on the original scale, and min(x) and max(x) are the minimum and maximum values attained for this indicator across all countries. In this way, the normalised indicators Yi^N have values lying between 0 (laggard, xi = min(x)) and 1 (leader, xi = max(x)).
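A minimal sketch of this normalisation step (illustrative only; the country codes and values below are invented, not EIS data):

```python
import numpy as np

def minmax_normalise(values):
    """MinMax normalisation: the worst-performing country maps to 0, the best
    to 1, all others to their relative position within that range."""
    values = np.asarray(values, dtype=float)
    return (values - values.min()) / (values.max() - values.min())

indicator = {"SE": 3.9, "FI": 3.5, "AT": 2.4, "EL": 0.6}   # hypothetical values
normalised = {c: float(v)
              for c, v in zip(indicator, minmax_normalise(list(indicator.values())))}
print(normalised)   # SE -> 1.0, EL -> 0.0, FI and AT in between
```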

The problem of weighting
The next “problem” concerns weighting: as mentioned, the SII aggregates 25 indicators into a single number; in this process, all indicators receive the same weight, wi = 1 / 25 = 0.04 (see note ii). This means that all indicators are “equally important” – a rather heroic assumption given the indicators’ vastly different coverage (just try to compare the share of the population with tertiary education and venture capital for start-ups). Certainly, a weighting regime which is beyond all doubt is unattainable; the EIS, however, takes a very short cut and refrains from any attempt at defining such a weighting regime. This is justified in the Report on European Innovation Scoreboard 2005 (Sajeva et al., 2005) with the aim to “…keep the weighting as simple as possible”. To be fair, the authors did not reach this decision without giving it some thought: they examined a total of four different weighting methods:

• the budget allocation method, which determines the relative weights of the dimensions and their indicators on the basis of an expert survey;

• equal weighting;

• factor analysis to correct for “overlaps” between indicators; and


• Benefit of the doubt, using weights which are tailored to each country so that this country attains the best possible ranking.

They conclude that the ranking is quite independent of the weighting method; therefore, they chose the simplest one. Nevertheless, they drew some criticism (see e.g. Grupp and Mogee (2004) or Schubert (2006)). Using linear programming, Schubert calculated weighting vectors intended to rank each country as well and as badly as possible. In doing so he showed that it was possible to rank every country within a very wide range, as the following Figure 2 shows:

Figure 2: Attainable rankings using different weighting regimes

[Bar chart: for each of FI, SE, DE, BE, AT, FR, LU, UK, NL, DK, IE, IT, PT, ES and EL, the range of ranks (1 to 15) attainable under unrestricted weighting vectors.]

Source: Schubert (2006)

The top positions are quite unambiguous: independently of the weighting regime, Finland (FI) and Sweden (SE) are among the top three countries; also, Greece’s (EL) position at the bottom seems well established (although with optimal weighting rank 9 is attainable). On the other hand, Luxembourg (LU) can take literally any position.

In this example, the weighting vectors were unrestricted, i.e. they could assume any value (see note iii). To achieve more realistic results, Schubert repeated his calculations with restricted weighting vectors, forcing the individual weights into the range wi = [0.02, 0.06], i.e. between about half and double the value of equal weighting (wi = 1 / 26 = 0.0385). Figure 3 shows the result:


Figure 3: Attainable rankings using different (but restricted) weighting regimes

[Bar chart: for each of FI, SE, DE, BE, DK, NL, UK, AT, FR, IE, IT, LU, ES, PT and EL, the range of ranks (1 to 15) attainable under weighting vectors restricted to [0.02, 0.06].]

Source: Schubert (2006)

The result is much clearer now: positions at the top and at the bottom are unambiguous and independent of the weighting vector. In between, the exact positions are – within rather narrow margins – somewhat manipulable. Altogether, this result seems rather close to the conclusion that the weighting regime is of secondary importance only – vindicating the decision of Sajeva et al. (2005) to “…keep the weighting as simple as possible” (criticism of which, by the way, was Schubert’s (2006) and Grupp and Mogee’s (2004) point of departure).
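The following sketch illustrates the two aggregation steps and this kind of sensitivity check with made-up data. It is not the official EIS computation, and it probes rank stability by sampling random weight vectors in the restricted range rather than solving Schubert's linear programmes, which find the exact extremes:

```python
import numpy as np

rng = np.random.default_rng(0)
countries = ["SE", "FI", "AT", "EL"]
raw = rng.uniform(size=(len(countries), 25))      # invented values for 25 indicators

# MinMax-normalise each indicator across countries, then aggregate.
norm = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))

def sii(weights):
    """Composite index: weighted average of the normalised indicators."""
    return norm @ (weights / weights.sum())

equal = np.full(25, 1 / 25)
order = np.argsort(-sii(equal))
print("equal-weight ranking:", [countries[i] for i in order])

# How far can each country's rank move when weights stay within [0.02, 0.06]?
ranks = []
for _ in range(2000):
    w = rng.uniform(0.02, 0.06, size=25)
    ranks.append(np.argsort(np.argsort(-sii(w))) + 1)   # 1-based rank per country
ranks = np.array(ranks)
for i, c in enumerate(countries):
    print(c, "attainable ranks:", ranks[:, i].min(), "-", ranks[:, i].max())
```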

5. THE SII 2007

The following section will show that the result, i.e. a country’s position in the SII, is determined not only by the selection of indicators but also by the availability of the data used. In addition, we will analyse the results of the SII 2007, published in February 2008, in more detail.

For this, the following Table 3 first gives an at-a-glance overview of the official country rankings at the date of publication. Hence, Austria’s overall 8th rank in 2005 was based on the data available in 2005, the 13th rank in the following year resulted from the data available in 2006, and so forth.


Table 3: Country ranking on the basis of the SII

Rank   2001   2003   2004   2005   2006   2007
  1    S      FI     JP     SE     SE     SE
  2    US     SE     SE     CH     CH     CH
  3    FIN    CH     FI     FI     FI     FI
  4    UK     IS     US     JP     DK     IL
  5    J      UK     CH     DK     JP     DK
  6    DK     DK     DE     US     DE     JP
  7    NL     DE     DK     DE     US     DE
  8    IRL    NL     IS     AT     LU     UK
  9    D      IE     UK     BE     UK     US
 10    F      FR     BE     NL     IS     LU
 11    A      BE     FR     UK     NL     IS
 12    B      NO     NL     FR     FR     IE
 13    L      AT     IE     IS     AT     AT
 14    E      IT     NO     LU     BE     NL
 15    I      LU     AT     IE     IE     FR
 16    GR     ES     EE     NO     NO     BE
 17    P      PT     SI     IT     SI     CA
 18           EL     IT     EE     CZ     EE
 19                  ES     SI     EE     AU
 20                  PT     HU     IT     NO
 21                  LU     ES     ES     CZ
 22                  BG     CY     CY     SI
 23                  CZ     PT     MT     IT
 24                  LT     LT     LT     CY
 25                  HU     CZ     HU     ES
 26                  MT     BG     HR     MT
 27                  SK     PL     SK     LT
 28                  EL     SK     PT     HU
 29                  LV     EL     LV     EL
 30                  CY     LV     PL     PT
 31                  RO     MT     EL     SK
 32                  PL     RO     BG     PL
 33                  TR     TR     RO     HR
 34                                TR     BG
 35                                       LV
 36                                       RO
 37                                       TR

Note: horizontal lines in the original table mark the EU15, EU25 and EU27 averages.

Source: EIS 2001, 2003, 2004, 2005, 2006, 2007

The current SII 2007 exhibits an interesting, because piecewise linear, pattern (Figure 4): the top positions show a uniformly falling trend, from Sweden (0.73) to Ireland (0.49). The next five positions (from Austria’s 0.48 to Belgium’s 0.47) are extremely close (all five together are separated by less than two of the top-ranked countries are on average); considering measurement errors at the level of the indicators (and the issue of the “correct” weighting regime), these countries can be viewed as virtually on par.

After this group, and a shift of about 0.1 points, the downward trend becomes somewhat stronger, followed by a once more rather level section (EU positions 20-25). Not counting Turkey, Latvia and Romania come last among the EU27.


Figure 4: SII 2007 - Ranking

[Bar chart of the SII 2007 (vertical axis 0.0 to 0.8) by country, in descending order: SE, CH, FI, IL, DK, JP, DE, UK, US, LU, IS, IE, AT, NL, FR, BE, EU, CA, EE, AU, NO, CZ, SI, IT, CY, ES, MT, LT, HU, EL, PT, SK, PL, HR, BG, LV, RO, TR; EU27 countries, other countries and the EU average are marked, with ranks 1-27 referring to the EU27 countries.]

Source: EIS 2006

All this implies that a country’s exact position has to be interpreted with some caution: positions within the middle group (Austria to Belgium) and the lagging group (Hungary to Bulgaria) seem rather interchangeable, with the exact position probably depending more on “luck” than on “ability” (all that seems undisputed is their belonging to the respective group of countries). The positions at the very top and bottom, however, look quite “settled”.

The SII over time
Over time, the SII shows – in accordance with the evidence of rather subdued intertemporal variability at the level of the indicators – a rather “smooth” trend (cf. Figure 5):


Figure 5: SII and ranking over time

Ranking over time:

Country   2003   2004   2005   2006   2007   ΔRank
SE          1      1      1      1      1      0
FI          2      2      2      2      2      0
DK          3      3      3      3      3      0
DE          4      4      4      4      4      0
UK          5      5      5      6      5      1
LU          8      6      6      5      6      3
IE          7      9      7      7      7      2
AT         11     11     11     11      8      3
NL          9      8      9     10      9      2
FR         10     10     10      9     10      1
BE          6      7      8      8     11      5
EE         12     13     12     12     12      1
CZ         14     14     14     14     13      1
SI         15     12     13     13     14      3
IT         13     15     15     15     15      2
CY         17     17     17     17     16      1
ES         16     16     16     16     17      1
MT         18     18     18     18     18      0
LT         21     22     21     19     19      3
HU         20     20     20     21     20      1
EL         19     19     19     22     21      3
PT         24     21     22     20     22      4
SK         22     23     23     23     23      1
PL         23     24     24     24     24      1
BG         25     25     25     25     25      0
LV         26     26     26     26     26      0
RO         27     27     27     27     27      0

[Line chart: SII values over 2003-2007 for the same 27 countries (vertical axis: SII, 0.10 to 0.90).]

Source: EIS

For most countries, the time series of their SII is quite smooth and typically homogeneous (either falling or rising over the 5 years). Moreover, the SII values exhibit a trend “towards the mean”: above-average countries losing and below-average countries gaining over time. For the ranking this implies that changes in positions are moderate as well. Such changes occur almost exclusively within groups of countries with similar SII values, for example the group AT/BE/FR/IE/NL, which can always be found at positions 7-11 (if in slightly varying order). The same can be said of the group CZ/EE/ES/IT/SI vying for positions 12-16.

In this intertemporal comparison, a very important aspect has to be borne in mind when interpreting (apparent) changes in the published, up-to-date rankings: according to Figure 5, Austria’s position has barely changed during the last 5 years (the ranking has changed, from 11 to 8, but as we have seen this is rather due to “indicator noise” than to real advances). This, however, is in blatant contrast with the public excitement surrounding Austria’s gaining the 5th rank among the European countries in 2005 (from 10 to 5), only to be followed by a fall back to the 9th rank in the subsequent year (see Table 3). The reason for this (apparent) contradiction can be found in the way the time series of Figure 5 was calculated: this was done using the current (i.e. 2008) list of indicators and weighting method, as well as – and this is much more important – on the basis of data available now (in 2008). For any year of the time series, therefore, the values can depart from the “official” SII of the selfsame year if


a) the list of indicators is changed (as happened between 2002 and 2003 as well as between 2004 and 2006);

b) the weighting method is changed (the weight of each indicator changes with the number of dimensions and their respective indicators; again, such changes happened in 2002/03 and 2004/05);

c) the availability of data improves.

This last reason points to the fact that, for example, the “official” SII 2005 (see Table 3) was calculated on the basis of data which were available in 2006 (when the SII 2005 was published), and which almost certainly included other (older) data than the SII for the year 2005 as calculated in 2008 and presented in Figure 5.

This last point explains the discrepancy between rank #5 in the official EIS 2005 as shown in Table 3 and rank #11 as shown in Figure 5: even if neither the list of indicators nor the weighting method changed from 2005 to 2006, what has changed is the available data. Especially the 6 indicators which are based on the CIS exhibit striking differences: for the EIS 2005, data from the CISlight (2002) were used; now (in 2008), when calculating the SII for the year 2005, data from the CIS4 (2004) can be used. From CISlight to CIS4, however, almost all variables have lost in value (like indicator 4.4, which has fallen from 10.6 in CISlight to 5.4 in CIS4). The differences between CISlight and CIS4, however, are not plausible; rather, they point to problems of comparability, brought about by the fact that – for quite diverse reasons – the CISlight almost certainly overestimated most indicators.

This means, among other things, that Austria’s #5 ranking among European countries in the EIS 2005 was almost certainly “too good” and that the current #8 position more adequately reflects Austria’s “true position”. Compared with the time series of EIS values, this #8 position even represents an improvement over earlier positions. In the light of the above discussion, however, this conclusion must be a qualified one: the SII values of the countries in Austria’s peer group are quite similar; it seems safe to assert that Austria is in the “middle” group, somewhere between ranks #7 and #11; also, Austria seems safely ahead of the “new” members and the Mediterranean countries, but also solidly behind the “top group” of the northern EU members. In short, Austria is squarely in the medium group; whether on position 7, 9 or 11 depends more or less on chance. What, now, could Austria do to improve its position? The possibilities for direct intervention are limited.

6. WHAT CAN POLICY DO?

The logical starting point would be indicator 3.3 innovation expenditure as a share of turnover, which so far has not been available for Austria (it is therefore not included in Austria’s SII). If this indicator is above average, its inclusion would be good for Austria’s ranking; a modest value, on the other hand, would be bad. Ironically, it could be argued that this indicator should be given a good inspection before it is measured.

On the level of the other 24 indicators, a “wait-and-see” approach will not witness much change, as most of them show similar trends in almost all countries. There are, however, four indicators where Austria is clearly (i.e. more than 20 %) below average (see Appendix). Two of them do not lend themselves to short-term improvement, as they represent the top of the education system, which can only be changed over longer periods: these are 1.1 New S&E graduates per 1000 population aged 20-29 and 1.2 Population with tertiary education per 100 population aged 25-64. As regards the first indicator, it probably under-estimates Austria’s endowment with engineering brainpower, as it only counts tertiary degrees (thus excluding the Austrian speciality of the popular Technical High Schools, which offer engineering training at the secondary level of education). The peculiarities of the second indicator have been mentioned already (see section 3).

The worst indicator for Austria by far is 3.4 early-stage venture capital. At the same time, it is probably the one which could be influenced most easily in the short term: in 2005, early-stage venture capital amounted to 0.012 % of GDP in Austria. Raising this to the EU average of 0.022 % would amount to some 35 million euros – not an outrageously large sum (whether such a sum would find enough worthy recipients, however, is a different matter).

Below average are almost all indicators of the dimension “output – application” (4.1-4.5), covering high-tech and new products. Besides problems concerning data quality (2 of the 5 indicators are taken from the CIS) and ambiguity of definitions (the classification of sectors into low-, medium- and high-tech is not wholly convincing; see note iv), this dimension clearly shows the dilemma faced by would-be proactive politics: what should be done to raise the share of high-tech exports, and within a period of just a few years? Ironically speaking, the only sure way to achieve this would be to ban all low- and medium-tech exports – a patently absurd recipe.

Another good example of this kind is provided by indicator 4.1 Employment in high-tech services (% of total workforce), which is somewhat below average in Austria. On the other hand, the share of employees in the Austrian tourism sector is above average. Although reducing this share to the EU average would raise Austria’s unemployment rate by some 2 percentage points (which would be bad, although it would result in an unemployment rate still below the EU average), it would ceteris paribus raise the share of employment in high-tech services (which would be good for Austria’s innovation ranking). Of course, this example sounds absurd – but it serves to highlight the fact that “shares” or “ratios” comprise two terms, a numerator and a denominator. A ratio is low if either the numerator is low or the denominator is large. A ratio which is deemed “too low” can be remedied through either component.
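A tiny numerical illustration of the numerator/denominator point (all figures below are invented, not Austrian statistics):

```python
# Invented figures: 150,000 high-tech service jobs in a workforce of 4,000,000.
hightech, total = 150_000, 4_000_000
print(hightech / total)              # 0.0375 -> a 3.75 % share

# Remove 80,000 jobs elsewhere (e.g. tourism); high-tech employment unchanged.
print(hightech / (total - 80_000))   # ~0.0383 -> the share rises with no new high-tech job
```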

In a nutshell, many indicators reflect structures which have evolved over longer time periods; they cannot be changed overnight (nor should they be)v.

Just two (or three) of the indicators could be influenced quickly and directly by politics: 2.1 public R&D expenditures and 2.4 Share of enterprises receiving public funding for innovation (plus 3.4 early-stage venture capital, cf. above). Both indicators have recorded impressive gains over recent years, to the extent that Austria now shows (markedly) above-average values. Of course, further expansion would be possible – although it will increasingly be confronted with the problem of efficient allocation.

As for indicators of business innovation, these are of course shaped by political conditions. However, the way from political action to entrepreneurial interpretation is probably rather tortuous; in addition, when compared with other countries, the Austrian system of incentives in this respect is already quite generous.

In short: possibilities for (short-term) influence by politics on the variables measured by the EIS are limited – only 2 of the 25 indicators can be influenced directly. Many are the result of long-term developments, which can be guided but not determined by the political system.

7. SOME FINAL REMARKS

To the extent that the included indicators can be unambiguously measured and compared, the EIS thus gives a snapshot of a country’s position regarding “innovation”; as such, it provides valuable information to policy makers. Due to a certain lack of such unambiguity, the EIS also reflects country-specific idiosyncrasies (of a country’s education system, its economic structure, etc.), which cannot and should not hastily be labelled “good” or “bad”. In the longer term, the EIS is in all probability a feasible and practical tool for judging the progress of countries towards “innovativeness”. What is equally probable, however, is the futility (maybe bordering on the irresponsible) of using the EIS in an attempt to gauge short-term developments – as we have seen, in the short term the indicators tend to be swamped by measurement noise. The logical conclusion, therefore, would be for the EU to stop the annual ritual of the innovation ranking, especially its tendency to provide scores that are as “up-to-date” as possible – with the result that arguably incompatible data are used for international comparisons (for some countries, some indicators are based on more recent data than for other countries; the result is an unrealistic variability in the ranking – a variability which almost completely disappears when the EIS is calculated on the basis of older, more compatible data).

From a methodological point of view, calculating a composite index is the most critical part of the EIS exercise. One of the main arguments for the SII is that it is “… a useful tool for policy making … and consequently much easier to interpret than trying to find a common trend in many separate indicators” (Sajeva et al. 2005, Arundel et al. 2008). However, in our opinion the opposite is true. The usefulness of an aggregate, single figure for practical policy purposes is low, as it is neither immediately transparent nor does it imply any specific action to be taken. Neither the SII nor the single indicators are related to typical areas of policy intervention. In general, innovation policy has to take very careful account of market environments, technology developments and the specific barriers to innovation faced by different types of enterprises (SMEs, start-ups, large companies, export vs. home market orientation, etc.) in order to design an appropriate policy intervention. Thus, innovation policy has to take into account the specific institutional and economic environment of a country. It would be helpful to combine the publication of the EIS results with detailed background information on the features of the respective (national) innovation system that may affect the EIS results. The OECD Economic Survey for Austria (OECD 2007a) can be mentioned as an example of this approach. By using different data sources and by taking specific policy initiatives into account, the OECD report arrives at the following recommendations for making innovation policy in Austria more effective. The headlines of these recommendations are:

• Simplify the institutional framework for innovation policy.
• Ensure efficiency of innovation subsidies.
• Improve product market competition.
• Improve conditions for start-ups.
• Ease immigration of skilled workers and researchers.
• Improve human capital development.

Except for the last recommendation, no hint of these policy interventions can be derived from the EIS. A ranking may attract short-term attention from the general public, at the cost of oversimplifying the issue and weakening its connection to policy making.

On the other hand, much of the European Commission’s power with respect to national policies rests with “naming and shaming”, which, in the case of the reception of the EIS in Austria, for example, has arguably fulfilled its role (even if the initial uproar has not always been followed by sober and sensible policies). Whether this approach will work for long, however, is an open question.


8. REFERENCES

Arundel, A., H. Hollanders (2008), Innovation Scoreboards: indicators and policy use; in: Nauwelaers, C., R. Wintjes (eds.) (2008), Innovation Policy in Europe, Edward Elgar.

European Commission (2000), Structural indicators; COM(2000) 594 final.

European Commission (2005), European Innovation Scoreboard 2005. Comparative Analysis of Innovation Performance; European Trend Chart on Innovation.

Giering, C., Metz, A. (2004), Laboratory for Integration. Opportunities and Risks of the “Open Method of Coordination”; Reform Spotlight 2004/02.

Goetschy, J. (2003), The open method of coordination and EU integration; Studienbrief 2-010-0204, Hochschulverbund Distance Learning.

Grupp, H., M.E. Mogee (2004), Indicators for national science and technology policy: how robust are composite indicators?; Research Policy 33, pp. 1373-1384.

Innometrics (2008), EIS 2007 - Comparative analysis of innovation performance.

OECD (2007a), Economic Survey – Austria, Paris.

Régent, S. (2002), The Open Method of Co-ordination: A supranational form of governance?; International Institute for Labour Studies, DP/137/2002.

Sajeva, M., D. Gatelli, S. Tarantola, H. Hollanders (2005), Methodology Report on European Innovation Scoreboard 2005; European Trend Chart on Innovation, European Commission.

Schibany, A., G. Streicher, H. Gassler (2006), Österreich im Kontext des Lissabon- und Barcelonaprozesses; InTeReg Research Report Nr. 52-2006, Joanneum Research.

Schibany, A., G. Streicher, H. Gassler (2007), Der European Innovation Scoreboard: Vom Nutzen und Nachteil indikatorgeleiteter Länderrankings; InTeReg Research Report Nr. 65-2007, Joanneum Research.

Schibany, A., H. Gassler, G. Streicher (2007), High Tech or Not Tech; InTeReg Working Paper Nr. 35-2007, Joanneum Research.

Schubert, T. (2006), How Robust are Rankings of Composite Indicators when Weights are Changing?; manuscript, Fraunhofer ISI.

Veugelers, R. (2007), Developments in EU Statistics on Science, Technology and Innovation: Taking Stock and Moving Closer to Evidence-based Policy Analysis; in: OECD (2007), Science, Technology and Innovation Indicators in a Changing World, Paris.


Appendix

Figure 6: Austria’s relative position in the EIS

EIS 2007 innovation performance, Austria relative to the EU27 (EU27 average = 100):

INPUT - Innovation drivers
1.1 New S&E graduates aged 20-29: 76
1.2 Population with tertiary education aged 25-64: 77
1.3 Broadband penetration rate: 96
1.4 Participation in life-long learning: 136
1.5 Youth education attainment level (>upper sec. education): 110

INPUT - Knowledge creation
2.1 Public R&D expenditures: 115
2.2 Business R&D expenditures: 137
2.3 Share of medium-high-tech and high-tech R&D: 96
2.4 Share of enterprises receiving public funding for innovation: 198

INPUT - Innovation & entrepreneurship
3.1 SMEs innovating in-house: 150
3.2 Innovative SMEs co-operating with others: 85
3.3 Innovation expenditures: 0
3.4 Early-stage venture capital: 6
3.5 ICT expenditures: 98
3.6 SMEs introduced organisational innovation: 142

OUTPUT - Application
4.1 Employment in high-tech services: 89
4.2 Exports of high technology products: 68
4.3 Sales of new-to-market products: 71
4.4 Sales of new-to-firm products: 86
4.5 Employment in medium-high and high-tech manufacturing: 102

OUTPUT - Intellectual property
5.1 EPO patents per million population: 152
5.2 USPTO patents per million population: 121
5.3 Triad patents per million population: 145
5.4 Community trademarks per million population: 205
5.5 Community industrial designs per million population: 191

Source: EIS 2007

Notes

i See below.

ii To be more precise, it is the 5 intermediate indicators which receive equal weights: the 25 indicators are grouped into 5 intermediate indicators (“dimensions”: innovation drivers, knowledge creation, innovation and entrepreneurship, application, intellectual property). As these dimensions contain 4, 5 or 6 indicators, there is not an absolutely equal weighting at the level of the 25 indicators.

iii However, Schubert’s results are somewhat unclear: if really ANY weighting was possible, a weight of 1 for some indicator and 0 for every other indicator would be possible as well. Under these circumstances, the ranking for this “composite” indicator would be the same as the ranking for the indicator with weight 1. However, the indicator least favourable to Finland and Sweden is 4.4 Share of turnover with new-to-the-firm products, where Finland comes 12th and Sweden 13th – and these would at least be the worst positions that Finland and Sweden could be assigned using this very peculiar weighting vector. Similarly for Greece, whose second place in 3.3 innovation expenditures should determine the lower bound of its best position in the composite indicator. The range of attainable positions would therefore be even larger than presented by Schubert. Maybe the weighting vector was not as “free” as Schubert reports (requiring each indicator to receive at least a small positive weight). Whatever the reason, Schubert shows impressively that with clever weighting (and a bit of “goodwill”) quite diverse results can be achieved.

iv For example, the OECD classification does not allow for “high-tech niches” in medium-tech sectors like the manufacture of machinery, which is quite important in Austria. On the other hand, the mere assembly of hardware modules is included in the high-tech sector “manufacture of computers”.

v The oft-cited example of Finland, which experienced a well-nigh revolutionary structural change within just a couple of years, is one that should not be recommended for emulation frivolously: it was the result of a deep recession following the collapse of the Soviet Union, and with it the markets for Finland’s low-tech exports. Moreover, Finnish high-tech is to a large degree represented by a single enterprise – not a textbook example of a balanced portfolio.