
ROSS - Reliability, Safety, and Security Studies at NTNU

FOUNDATIONS AND FALLACIES OF RISK ACCEPTANCE CRITERIA

Inger Lise Johansen


Dept. of Production and Quality Engineering

Address: N-7491 Trondheim
Visiting address: S.P. Andersens vei 5
Telephone: +47 73 59 38 00
Facsimile: +47 73 59 71 17

TITLE

Foundations and Fallacies of Risk Acceptance Criteria

AUTHOR

Inger Lise Johansen

SUMMARY

The objective of this study is to discuss and create a sound basis for formulating risk acceptance criteria. Risk acceptance criteria are quantitative or qualitative terms of reference guiding decision making on acceptable risk. By examining central concepts, metrics and approaches, a comprehensive foundation for the setting and use of risk acceptance criteria is provided. First, foundational issues on risk, probability and risk acceptance are presented. Subsequently, the strengths and fallacies of individual and societal risk metrics are examined, followed by a similar examination of principal and practical approaches to setting risk acceptance criteria. The ability of risk acceptance criteria to offer sound decision support is finally questioned.

REPORT NO.

ROSS (NTNU) 201001

ISBN

978-82-7706-228-1

DATE

2010-02-23

SIGNATURE

Marvin Rausand

PAGES/APPEND.

105

KEYWORD NORSK

RISIKOVURDERING

AKSEPTKRITERIER

RISIKOMÅL

KEYWORD ENGLISH

RISK ACCEPTANCE

ACCEPTANCE CRITERIA

RISK METRICS


Preface

This report was written in the autumn of 2009 at the Norwegian University of Science and Technology (NTNU), Department of Production and Quality Engineering. For their devoted guidance, special gratitude is directed to Professor Marvin Rausand and Postdoc Mary Ann Lundteigen.

I also wish to thank Vivi Moe for allowing me to use her piece 'Tuppen og Lillemor' on the front of this publication.

Trondheim, Norway, 18th February 2010

Inger Lise Johansen


Abstract

The purpose of this study is to discuss and create a sound basis for formulating risk acceptance criteria. Examining the strengths and pitfalls of central concepts, metrics and approaches, the thesis provides a comprehensive foundation for the setting and use of risk acceptance criteria. The findings are derived from integration and critique of pioneering and state-of-the-art literature.

Risk acceptance criteria are quantitative or qualitative terms of reference, used in decisions about acceptable risk. Acceptable risk means a risk level that is accepted by an individual, enterprise or society in a given context. Our willingness to accept risk depends on the benefits from taking the risk, the extent to which the risk can be controlled and the types of consequences that may follow. Acceptable levels of risk are hence never absolute nor universal, but contingent on trade-offs and contextual premises.

Fatality risk can be expressed by individual or societal risk metrics. To capture the distribution and totality of risk posed by a particular system, both individual and societal risk acceptance criteria are necessary. Suitability for decision support and communication, unambiguity and independence are qualities that should be sought. The main expressions of individual risk are IRPA and LIRA. IRPA expresses the activity-based risk of a specific or hypothetical person, and is particularly suited for decisions concerning frequently exposed individuals. While exposure is decisive to IRPA, LIRA assumes that a person is permanently present near a hazardous site. LIRA is thus location-specific and pragmatically reserved for land use planning.

Common societal risk metrics are FN-curves, FAR and PLL. FN-criterion lines uniquely distinguish between multiple and single fatality events, but are accused of providing illogical recommendations. PLL is a simpler metric, particularly suited for cost-benefit analyses. Overall acceptance limits are seldom expressed by PLL, since exposure is not reflected. In contrast, FAR is defined by unit of exposure, enabling realistic comparison with predefined criteria. If injury frequency surpasses that of fatalities, a metric related to injury or ill health may provide the most appropriate criterion. All metrics are fraught with assumptions and difficulties, necessitating awareness amongst practitioners.

Various approaches have been developed for setting risk acceptance criteria. A distinction is drawn between fundamental principles, deductive methods and specific approaches. Utility, equity and technology offer principal criteria to be used alone or as building blocks. While utility-based criteria concern the overall balance of goods and bads to society, the principle of equity yields an upper risk limit that none of society's members should surpass. Technology-based criteria see acceptable risk attained by the use of state-of-the-art technology, possibly at the expense of considerations of cost-effectiveness and equity. Amongst deductive methods for solving acceptable-risk problems are expert judgment, bootstrapping and formal analysis. There are two questionable assumptions underlying bootstrapping methods: that a successful balance of risks and benefits was achieved in the past, and that the future should work in the same way. Formal analyses avoid this bias towards the status quo, demanding explicit trade-off analyses between the risks and benefits of a current problem. The advantages of formal analysis are openness and soundness, while a pitfall concerns the difficulty of separating facts and values.

Specific approaches to setting risk acceptance criteria are based on a combination of fundamental principles and deductive methods. Most cultivated is the ALARP-approach, capitalizing on the advantages of formal analysis and all principal criteria. Requiring risk to be as low as reasonably practicable, ALARP provides conditional rather than absolute criteria, uniquely capturing that risk acceptance is a trade-off problem. Problems remain, however, notably resource intensiveness and the imprecise notion of gross disproportionality. Conceptually close, but practically far from ALARP, is ALARA. Whereas upper criteria represent the start of ALARP discussions, they serve as the endpoint in ALARA. Arguments of reasonableness are considered already built into the strict upper limits of ALARA. Strict criteria are provided also by the GAMAB-approach, requiring new systems to be globally at least as good as any existing system. GAMAB can be seen as learning-oriented bootstrapping, but may reject developments on an erroneous standard of reference. While GAMAB is technology-based, the MEM-approach uses the minimum IRPA of dying from natural causes as reference level. Little impetus is given for reducing risk beyond this static requirement. Different from these risk-based approaches is the precautionary principle, intended for situations where great uncertainty makes comparison with a predefined metric meaningless. The precautionary principle has been attacked from many standpoints, but is concluded to offer a valuable guide in the absence of knowledge.

The extent to which risk acceptance criteria offer sound decision support is finally questioned. The various approaches differ with respect to consistency, practicality and ethical implications, and to the degree to which risk reduction is encouraged and risk acceptance reflected. The choice of metrics implicitly or explicitly affects how these issues are resolved. Interpreting risk and probability as subjective constructs is not seen to threaten the validity of risk acceptance criteria. What may cause a problem is regulators and practitioners understanding risk acceptance criteria as objective cut-off limits. The overall conclusion is that acceptance criteria offer sound decision support, but only if authors and users understand the assumptions and limitations of the applied metrics and approaches. There is a call for applied research on the role of the industries and the government in formulating and complying with risk acceptance criteria. As environmental consequences are omitted from the study, the lack of a sound basis for formulating environmental risk criteria urges further research.


Contents

1 Introduction
  1.1 Background
  1.2 Objectives
  1.3 Limitations
  1.4 Structure

2 Clarification of concepts
  2.1 What is risk?
    2.1.1 The meaning of risk
  2.2 Defining risk
  2.3 What can go wrong?
  2.4 How likely is it that it will happen?
    2.4.1 Classical theory of probability
    2.4.2 The relative frequency theory
    2.4.3 Subjective probability
  2.5 If it does happen, what are the consequences?
  2.6 Safety
  2.7 Risk assessment
  2.8 Risk acceptance criteria
  2.9 Acceptable risk
    2.9.1 One accepts options, not risks
    2.9.2 Acceptability is not tantamount to tolerability
    2.9.3 Factors influencing risk acceptance
    2.9.4 Risk acceptance is a social phenomenon

3 Expressing risk acceptance criteria
  3.1 Introduction
    3.1.1 Risk metrics
    3.1.2 To whom it may concern
  3.2 Aspects in the choice of risk metrics
    3.2.1 Generic requirements according to NORSOK Z-013
    3.2.2 Pragmatic considerations
    3.2.3 Past and future observations
  3.3 Individual risk metrics
    3.3.1 Individual risk per annum - IRPA
    3.3.2 Localized individual risk - LIRA
  3.4 Societal risk metrics
    3.4.1 FN-curves
    3.4.2 Potential loss of life - PLL
    3.4.3 Fatal accidental rate - FAR
  3.5 Other
    3.5.1 Risk matrix
    3.5.2 Loss of main safety functions
    3.5.3 Safety integrity level - SIL
    3.5.4 Injury and ill health

4 Deriving risk criteria
  4.1 Introduction
  4.2 Fundamental principles
    4.2.1 Utility
    4.2.2 Equity
    4.2.3 Technology
    4.2.4 An alternative principle
  4.3 Deductive methods
    4.3.1 Expert judgment
    4.3.2 Bootstrapping
    4.3.3 Formal analysis
  4.4 Specific approaches
    4.4.1 ALARP
    4.4.2 ALARA
    4.4.3 GAMAB
    4.4.4 MEM
    4.4.5 The Precautionary principle

5 Concluding discussions
  5.1 Are risk acceptance criteria feasible to the decision maker?
    5.1.1 Non-contradictory ordering of alternatives
    5.1.2 Preciseness of recommendations
    5.1.3 A binary decision process
    5.1.4 Risk acceptance criteria simplify the decision process
  5.2 Do risk acceptance criteria promote good decisions?
    5.2.1 The interpretation of probability to risk acceptance criteria
    5.2.2 Ethical implications of risk acceptance criteria
    5.2.3 Compliance or continuous strive for risk reduction?
    5.2.4 One accepts options, not risks
  5.3 What we really are looking for
    5.3.1 Overall conclusions and recommendations for further work

References


1 Introduction

1.1 Background

Risk is ubiquitous. Of all the risks we face in everyday life, only a selection gets to preoccupy our worried minds. Some are unconsciously undertaken, others we are willing to live with, and yet a few provoke heated demonstrations or banning. While children of the postwar period were afraid of DDT and nuclear power plants, citizens of the new millennium are concerned with nanotechnology and global warming. Other risks have always been trivialized, like those of bicycling to work or pursuing the perfect tan. Risks, in other words, differ in acceptability: across times, people and situations. And when decisions involving risk are to be taken, risk acceptance is the measure.

The risk of swine flu has recently dominated the headlines of the daily press. At its onset this summer, people were faced with the choice of canceling a long-planned holiday. As the pandemic developed, a topical decision problem has been whether to get vaccinated. Although pandemics are an affair of the state, these are ultimately personal decisions about individuals' willingness to live with the risk of swine flu. A contrasting current decision problem is the governmental settlement on future development of oil and gas production in Lofoten, Norway. A variety of actors have expressed their opinions, disagreeing on the relative importance of state economy and worldwide energy scarcity, in comparison with environment, tourism and fishery preservation. The debate has been further clouded by imprecise factual statements, like Havforskningsinstituttet's claim that between 0 and 100 % of the stock of fry might be lost (Teknisk Ukeblad, 2009). How can a decision be made in this case? Part of the solution lies in the reply of the Department of Environment, demanding that comprehensive risk analyses be performed. Risk analyses are widely used to support discussions related to industrial and societal developments. To evaluate the results of risk analysis, a term of reference is needed.


Figure 1.1. Risk decisions: risk analysis and risk acceptance criteria (RAC) feed into the decision about risk, together with economy, regulatory requirements, public opinion, interest groups and other influencing factors.

Several laws and regulations prescribe the use of risk acceptance criteria for evaluating new or existing hazardous systems. The criteria offer a level of comparison for the results of risk analysis, and decisions are reached on the grounds that risk prospects shall not be unacceptably high. However, Norwegian authorities do not give any guidance on how to establish such criteria. In comparison, the UK Health and Safety Executive (HSE) is leading in the field, offering a consistent framework for practitioners to follow. The value of risk acceptance criteria has also been questioned, recently in a suite of papers by Terje Aven and his coworkers at the University of Stavanger. As implied in the Lofoten case and illustrated in Figure 1.1, decisions on risk involve complex, opposing considerations not determined by risk alone. A considerable influence on such discussions is the pioneering work of Fischhoff et al. (1981). While three decades have passed since they first evaluated decision methodologies for acceptable-risk problems, academic and pragmatic difficulties still remain. These are manifested in the somewhat questionable practice on the continental shelf, calling for enhanced knowledge of the fundamentals of formulating risk acceptance criteria.

1.2 Objectives

The purpose of this thesis is to discuss and create a sound basis for formulating risk acceptance criteria. From this overall goal, five lower-level objectives are deduced:

1. Give a description of the various approaches to setting risk acceptance criteria related to harm to people, and discuss their basis and applicability. Both individual and societal risk shall be covered.


2. Present and discuss the main concepts and quantities used to formulate risk acceptance criteria.

3. Give a description of approaches to setting environmental risk criteria.

4. Discuss conceptual problems related to risk acceptance criteria. This should include a discussion of the objective/subjective interpretations of probability, and also of risk.

5. Compare the use of risk acceptance criteria in two or more selected areas. These shall include the Norwegian offshore oil and gas industry and maritime transport.

Following agreement with the supervisor, tasks 3 and 5 will not be covered in the thesis.

1.3 Limitations

In pursuing the overall goal, the thematic coverage is confined to three out of five objectives. This is partly due to the limited time frame of project execution, but also because of the thorough examination required by the central objectives. The focus is thus restricted to harm to people, and fatalities in particular. Excluding the third task is unfortunate, since environmental criteria are required, but poorly understood, in the offshore and maritime industries. Due to the distinct nature of environmental risk, the reader should be aware that the findings are not directly transferable to environmental applications.

Disregarding the fifth task of performing a comparative study has pragmatic implications. Since no sectors have been explicitly examined, the findings are generic and decoupled from practical and contextual constraints. Paradoxically, this serves as an advantage as well as a limitation. On the positive side, experience transfer may be sought over a wide range of areas. On the other hand, this necessitates practical interpretations. A methodological weakness is furthermore induced, since sector-specific considerations lie implicit in the applied literature. While UK and Dutch contributions mainly concern nuclear power and land-based process industry, the offshore oil and gas industry is by and large the center of Norwegian researchers' attention. Although a generic focus is chosen, the thesis is thus knowingly biased towards the Norwegian offshore industry.

The study is purely theoretical, as its results are derived from integration and critique of pioneering and state-of-the-art literature. Risk acceptance is a wide concept to which many theorists have contributed. During the literature selection process, emphasis has been on gaining fundamental understanding of key concepts, rather than presenting radical ideas or advanced formulas. The reader is therefore not required to have any previous knowledge of the subject. A final limitation owes to the diversity of contributions, leaving it neither possible nor desirable to deduce categorical conclusions. Rather, the most important findings are the nuances and contrasts pinpointed in the discussions.

1.4 Structure

Chapter 2 is devoted to clarifying the basic concepts central to this thesis. Understanding the concepts of risk, probability and risk acceptance is a prerequisite for creating a sound basis for formulating risk acceptance criteria. The meanings, definitions and implications of these and related terms are explored, particularly aided by Kaplan & Garrick (1981) and Fischhoff et al. (1981).

Subsequently, chapter 3 addresses the second objective by elaborating the main concepts and metrics used to formulate risk acceptance criteria. First, the concepts of individual and societal risk are introduced, followed by qualitative considerations in the choice of risk metrics. The main part follows with a thorough examination of characteristics, pitfalls and strengths of common individual and societal risk metrics. Central to the discussions on societal risk is the literature review of Ball & Floyd (1998), while the annex of NORSOK Z-013N (2001) provides useful assistance throughout the chapter. While focus hitherto has been on fatality risk, a brief final section presents alternative metrics of risk acceptance.

In chapter 4, the first objective is sought through the presentation of various approaches for deriving risk acceptance criteria. The fundamental principles of utility, equity and technology described in the R2P2-report of HSE (2001b) are first introduced. Subsequently, the three methods of Fischhoff et al. (1981) for solving acceptable-risk problems are presented: expert judgment, bootstrapping and formal analysis. Based on the presentation of these generic principles and methods, the specific approaches of ALARP, ALARA, GAMAB, MEM and the precautionary principle are examined. Due to its methodological prominence, most attention is devoted to the ALARP-approach of HSE (1992).

Finally, chapter 5 raises a set of conceptual problems regarding the feasibility of risk acceptance criteria. The concluding discussion follows the thread of Aven and his coworkers (Aven & Vinnem, 2005; Aven et al., 2006; Aven, 2007; Abrahamsen & Aven, 2008), questioning the ability of risk acceptance criteria to provide sound decision support. First, issues of user friendliness are addressed. The meaning of probability, ethics, risk reduction and risk acceptance to various formulations of risk acceptance criteria is problematized thereafter. Although this chapter explicitly addresses the fourth objective of the thesis, the reader should note that conceptual discussions form an integral part of each preceding chapter. By taking a panoramic view of these discussions, overall conclusions and recommendations for further work are finally given.


2 Clarification of concepts

Most of the terms central to this report are used in everyday life. The reader will therefore have an intuitive understanding of what 'risk', 'probability' and 'risk acceptance' mean. Unfortunately, this intuitive appeal yields inconsistencies and confusion if their understandings are taken for granted. The importance of properly defining risk concepts is stressed by Ale et al. (2009), exemplifying that many risk management frameworks fail to define the probability they are referring to. This is unfortunate, since probability can be interpreted in very distinct ways, leading risk assessments in different directions. Not only are implicit interpretations troublesome within the community of scientific risk assessment. Fischhoff et al. (1981) contemplate that misunderstandings between lay people and experts partly arise from inconsistent definitions of risk, calling for currently used definitions to be made explicit, assumptions to be clarified, and cases that push them to their limits to be identified.

Following the advice of Fischhoff and colleagues, this chapter is devoted to clarifying the focal concepts of risk, probability, safety and risk acceptance. The promise of clarification may appear ironic, as the examination shows that there is a wide range of interpretations offering different insights on the subject. This owes to what Breugel (1998) denotes a reductionist approach to risk, meaning that risk phenomena exhibit a variety of aspects, each of which has been studied in detail by engineers, economists, sociologists, psychologists, philosophers and so on. While some definitions are explicitly adopted in this report, other concepts are left undefined, clarifying that a problem can and should be seen from a variety of angles.

2.1 What is risk?

In a recent study of risk management frameworks, Ale et al. (2009) find the distinction between the description of risk and the risk concept unclear. By description, one typically means a defining phrase, while a concept can be understood as a cognitive unit of meaning (Wikipedia). An attempt is made to capture this distinction in the following, by first grasping the broader meaning of risk, and then presenting a selection of scientific risk definitions. Amongst these, the pioneering contribution of Kaplan & Garrick (1981) is emphasized as the definition chosen for this report.

2.1.1 The meaning of risk

The question 'what is the risk?' pops up if your computer automatically blocks the downloading of a scientific article (or a less sanctimonious feature of the web). The answer you get is neither the possibility of your computer catching a virus, nor the consequences of such an attack. Instead, a description of the sources of unrequested downloading is provided, followed by a suite of precautionary measures to take. Sociologist Garland (2003) similarly asks 'what is risk?', to which he replies that the notion has a broad range of meanings, all of which are conditioned on the risk of something to someone. What they have in common is that risk is always understood in the context of uncertainty; if a future outcome is certain to happen, one does not face a risk. Economist Holton (2004) adds a second explanatory factor, in that speaking of risk gives meaning only if someone is aware of the outcomes, i.e. exposed to them. Although risk carries negative connotations for many, the outcomes may equally well be positive. Pointing to situations as distinct as launching a new business, skydiving and initiating a romantic relationship, Holton clarifies that risk is a general concept giving meaning to all situations involving the factors of uncertainty and exposure. Owing to this, one can speak of accident, political, health and financial risk, as well as the everyday risk of missing the school bus, being surprised by bad weather or downloading computer viruses.

Risk, hazards and relativity

Although risk encompasses a wide range of phenomena, an important distinction is drawn between risk and hazard. A hazard is a source of physical damage that may be converted into actual loss or damage, but exists only as a source. Risk, on the other hand, entails the possibility of this conversion (ISO/IEC Guide 51, 1999). The same distinction holds for risk and threat, the latter being conceptually reserved for situations of intentional acts (Salter, 2008). In common usage, the notions of risk, hazards and threats are often mixed, like the computer pop-up providing a list of threats to answer 'what is the risk?', or a newspaper disclosing that a popular toy is a risk. According to Garland (2003), this is a critical misconception, as risks, in contrast to hazards, never exist outside our knowledge of them. Since risk is concerned with the future, it can only be 'known' in probabilistic terms. This view is held not only by social scientists, but also by risk analysts such as Kaplan & Garrick (1981). They acknowledge that risk is dependent on what you know and what you do not know, and is thus relative to the observer. Adams (2003) makes an important observation in that knowledge of risk is gained in different ways. While some risks are directly perceivable (car accidents), others are known through science (cholera), whereas a third group of virtual risks even escapes the agreed knowledge of scientists (global warming). The cornerstone of Adams' reasoning is that the meaning and management of risk depend on how knowledge about the future is inferred.

Interpretations of risk in science

Accepting that risk is a property of the future and distinct from hazards, most people agree that it does not exist in a physical state today. But from here, there is substantial disagreement on whether risk is an objective feature of the world or a thought construct only. As seen in section 2.4, this is closely related to how one interprets the concept of probability. Further to the extreme, some social scientists deny that risk can be quantified. Following the reasoning of philosopher Campbell (2005), this is an erroneous claim, as risk can at least be assigned comparative quantities. Some risks are clearly high (e.g. jumping off the Elgeseter bridge in Trondheim), while others are evidently lower (crossing the same bridge on foot). It is indeed a rightful claim that only some risks (if any) can be given precise quantities. But according to Campbell, this does not mean that a statement of 'low risk' is any less quantitative.

Hovden (2003) summarizes four positions on risk in science:

• Rationalists see risk as a real-world phenomenon to be measured and estimated by statistics and controlled by scientific management.

• Realists interpret risk as objective threats that can be estimated independently of social processes, but may be distorted through frameworks of interpretation.

• Constructionists claim that nothing is a risk in itself. What we understand to be a risk is a product of cultural ways of seeing.

• Middle positions between realists and constructionists see risk as an objective threat that can never be seen in isolation from social and cultural processes.

Risk perception

Central to the realist and middle positions is the concept of risk perception, understood as subjective responses to hazard and risk (Breakwell, 2007). Among the factors influencing risk perception are voluntariness of exposure, immediacy of effects, personal control and catastrophic potential. Risk perceptions differ from analytical estimates of risk, as reported in the numerous studies comparing lay people's and risk analysts' evaluations. Amongst the pioneers of the field were Tversky & Kahneman (1974), revealing that lay inferences about probability and risk are distorted through biases of representativeness, availability and anchoring of initial guesses. Table 2.1 is extracted from the studies of Slovic (1987), illustrating that lay people perceive voluntary and personally controllable risks as lower than told by 'objective' estimations, while involuntary risks of catastrophic potential are typically exaggerated. Such a comparison gives no meaning in the constructionist and rationalist approaches, as the former deny the existence of any objective risk, while the latter claim that there is nothing but an objective risk.

Activity or technology          College students   Experts
Nuclear power                   1                  20
Smoking                         3                  2
Pesticides                      4                  8
Motor vehicles                  5                  1
Alcoholic beverages             7                  3
Surgery                         11                 5
X-rays                          17                 7
Electric power (non-nuclear)    19                 9

Table 2.1. Differences in lay people's and experts' ordering of perceived risk. Rank 1 represents the most risky activity or technology (extracted from Slovic (1987)).
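To make the divergence in Table 2.1 concrete, the short Python sketch below computes a Spearman rank correlation between the two orderings. The correlation measure is not part of Slovic (1987) or of this report; it is added here only as an illustration, and since only eight of the original activities are listed, the value is merely indicative.

    # Illustrative only: rank correlation between the student and expert
    # orderings of Table 2.1. The eight activities are re-ranked within the
    # subset before applying the standard Spearman formula (no ties).
    students = {"Nuclear power": 1, "Smoking": 3, "Pesticides": 4,
                "Motor vehicles": 5, "Alcoholic beverages": 7, "Surgery": 11,
                "X-rays": 17, "Electric power (non-nuclear)": 19}
    experts = {"Nuclear power": 20, "Smoking": 2, "Pesticides": 8,
               "Motor vehicles": 1, "Alcoholic beverages": 3, "Surgery": 5,
               "X-rays": 7, "Electric power (non-nuclear)": 9}

    def spearman(xs, ys):
        """Spearman rank correlation for two lists without ties."""
        n = len(xs)
        rx = [sorted(xs).index(x) for x in xs]   # rank within the subset
        ry = [sorted(ys).index(y) for y in ys]
        d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
        return 1 - 6 * d2 / (n * (n ** 2 - 1))

    activities = list(students)
    rho = spearman([students[a] for a in activities],
                   [experts[a] for a in activities])
    print(f"Spearman correlation, students vs experts: {rho:.2f}")

For this subset the correlation comes out close to zero, driven largely by the opposite placement of nuclear power, which is exactly the kind of divergence the table is meant to show.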

Contingency

Renn (2008) concludes that common to all epistemological positions on risk is the contingency between possible and chosen action. If the future were independent of today's activities, the term risk would have no meaning. This explains why risk can never be zero, unless we stop performing the activity in question. But then, another activity is initiated, from which yet another risk is introduced. Due to the contingent nature of our actions, accident risk means the possibility that an undesirable state of reality may occur as a result of natural events or human activities. Accepting this general conception, the next step is to explore how risk can be properly defined.

2.2 Defining risk

There is no generally agreed definition of risk. Paraphrasing Aven (2009), the variety of existing definitions can be sorted into three categories:


• A. Risk is an event or a consequence

• B. Risk is a combination of probability and expected loss

• C. Risk is expressed through events/consequences and uncertainties.

A risk definition of the first category is that of Klinke & Renn (2002, p.1071):

The possibility that human actions or events lead to consequences that harm aspects of things that human beings value.

The definition captures many of the aspects presented in the previous section, principally that risk is related to an uncertain future and to outcomes which humans care about. The reader should note that 'possibility' is used instead of 'probability', implying that the focus is on the consequences that might occur, rather than the likelihood of this happening. Considering the inverse relationship between consequence severity and frequency of occurrence, such definitions may bias discussions on risk towards catastrophic, but unrealistic, outcomes (Woodruff, 2005).

Representative of the third category is Aven (2003, p.176), defining risk as:

Uncertainty of the performance of a system (the world), quantified by probabilities of observable quantities.

According to this definition, risk is a quantitative means for capturing uncertainty.¹ Following Aven, it is meaningless to talk of uncertainty in the outcome of risk calculations, as risk itself is a measure of uncertainty. This is quite a radical view compared to definitions of the second category, which understand uncertainty as a measure of confidence in one's presentation of risk. In this category, we find the frequently cited contribution of Kaplan & Garrick (1981, p.13), defining risk as the answer to three questions:

1. What can happen?
2. How likely is it that it will happen?
3. If it does happen, what are the consequences?

¹ Uncertainty may be defined as 'something not certainly and exactly known' (Webster, 1978). Following Aven (2003, p.178), uncertainty is 'lack of knowledge about the performance of a system (the 'world'), and observable quantities in particular'. However, most practitioners speak of risk in situations where probabilities can be assigned, reserving uncertainty for situations where probabilities are undefined or uncertain (Douglas, 1985). In risk analysis, a distinction is made between two main types of uncertainty: aleatory (randomness/stochastic variations) and epistemic (scientific, due to our lack of knowledge about the world). While aleatory uncertainty is irreducible, the latter decreases with increasing knowledge (NASA, 2002).


Figure 2.1. Risk curves (adapted from Kaplan & Garrick (1981)). The axes are level of damage, log X, versus probability, log P; the figure shows a single risk curve and a family of curves at probability levels such as P = 0.1 and P = 0.9.

To answer the questions, Kaplan and Garrick suggest the making of a list as in Table 2.2. Each line, i, is a triplet of a scenario description, s_i, the probability, p_i, and the consequence measure, x_i, of that scenario. Including all imaginable scenarios, the table is the answer to the three questions and therefore the risk. Formally, risk is defined as a set of triplets:

R = {⟨s_i, p_i, x_i⟩}    (2.1)

Acknowledging uncertainty in consequence and probability estimations, the definition is further refined into:

R = {⟨s_i, p_i(φ_i), p_i(x_i)⟩}    (2.2)

p_i(φ_i) and p_i(x_i) are the probability density functions for the frequency and consequence of the i-th scenario. Arranging the scenarios in order of increasing severity and damage, (2.1) and (2.2) can be plotted as a single curve or a family of curves respectively, as shown in Figure 2.1. Kaplan and Garrick stress that it is not the mean of the curve, but the curve(s) itself, which is the risk. While indicating that a curve can be reduced to a single number, this is prescribed with caution. In their opinion, a single number is not a big enough concept to communicate risk, as is often done by claims of risk being 'probability times consequence'. Since this equates low-probability/high-damage scenarios with high-probability/low-damage scenarios, Kaplan and Garrick prefer risk to be described as 'probability and consequence'. The latter is adopted for this report, partly because it is most common in current risk analysis practices (Rausand & Utne, 2009). More importantly, risk acceptance is far more complex than a compound number of probability and consequence can tell, which is discussed in section 2.8. By illustration, the acceptability of traffic accidents and of a nuclear core meltdown is hardly the same, even though the mean values of the risk curves might equate the two.

Scenario   Likelihood   Consequences
s_1        p_1          x_1
s_2        p_2          x_2
...        ...          ...
s_n        p_n          x_n

Table 2.2. The risk table (adapted from Kaplan & Garrick (1981)).

Aven (2003)'s conception of risk can be questioned on a similar basis, namely because a single number is believed to represent uncertainty about the future, as well as our calculations of it. With reference to Adams (2003), this may not be a problem for risks perceived directly or through known science, but is likely to perplex virtual risks of great epistemic uncertainty. Also the consequence-oriented definition of Klinke & Renn (2002) is inadequate for our purpose, as it may distort the trade-off considerations that section 2.8 shows characterize acceptable-risk problems. Wu & Apostolakis (1990) see a major problem of non-probabilistic theories in the lack of a rational rule for combining risk information. As pointed out by Aven (2009), such definitions cannot assist in concluding whether a risk is high or low compared to other options. Kaplan and Garrick's idea of risk as a set of triplets therefore provides the basis for this report, calling for an examination of its three constituent elements.
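As a minimal Python sketch of the set-of-triplets definition in (2.1), the snippet below builds a small scenario table in the spirit of Table 2.2 and derives the corresponding risk curve by accumulating, for each damage level, the likelihood of scenarios at least that severe. The scenario names and numbers are invented for illustration and carry no empirical meaning.

    # A toy risk table of triplets (s_i, p_i, x_i): scenario description,
    # likelihood (here a frequency per year) and consequence measure.
    # All values are invented for illustration.
    scenarios = [
        ("small leak",   1e-2, 0.1),
        ("pipe rupture", 1e-3, 1.0),
        ("major fire",   1e-5, 10.0),
        ("total loss",   1e-7, 100.0),
    ]

    def risk_curve(triplets):
        """For each damage level x, the total likelihood of damage >= x."""
        levels = sorted({x for _, _, x in triplets})
        return [(x, sum(p for _, p, xi in triplets if xi >= x)) for x in levels]

    for damage, likelihood in risk_curve(scenarios):
        print(f"likelihood of damage >= {damage:6.1f}: {likelihood:.2e} per year")

Plotted on logarithmic axes, these points trace a staircase version of the single curve in Figure 2.1; an extra 'other' triplet could be appended to stand in for the unidentified scenarios discussed in the next section.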

2.3 What can go wrong?

To Kaplan & Garrick (1981), the answer to this question is a list of all identified scenarios. For illustration, they point to a pipe break, noting that this scenario actually represents a whole category of pipe ruptures of various kinds and sizes. Since the notion of scenario is somewhat imprecise, 'hazardous event' is preferred instead, following the example of Kjellén (2000).² Hazardous events are commonly restricted to initiating occurrences, meaning that they do not represent the actual damage that might follow. In this thesis, a hazardous event is conceptualized as the first significant deviation from normal operation that may lead to harm if not controlled. The risk of each event can be illustrated by the bow-tie diagram of Figure 2.2, visualizing a spectrum of possible causes and consequences of a specific scenario. This presentation format was launched within the petroleum company Shell, providing conceptual aid in identifying safety barriers for preventing critical events and mitigating their consequences (Chevreau et al., 2006).

² Some standards use the terms 'accidental event' (NORSOK Z-013N, 2001), 'unwanted event' (NS 5814, 2008) or 'initiating event'. As there seem to be no generally accepted terms or definitions, the process of properly and uniquely defining identified events is likely to be clouded.

Figure 2.2. Bow-tie diagram (adapted from Chevreau et al. (2006)): hazards on the left and consequences on the right, connected through the accidental event (AE) with probability Pr(AE), with barriers on both sides.

Accepting that risk is a set of triplets, the overall risk is given by the collection of bow-ties for all imaginable scenarios. Approaching risk in this manner has one fundamental weakness, namely that unidentified scenarios lead to underestimation of risk. Referring to an accident study showing that less than 60% of the scenarios were foreseen, Breugel (1998) notes that the most crucial part of risk analysis concerns the identification of accident scenarios. This difficulty is recognized by Kaplan & Garrick (1981), admitting that since the number of possible scenarios in reality is infinite, a listing of scenarios will always be incomplete. According to Kaplan and Garrick, this inherent weakness may be overcome by introducing an 'other' category of unidentified scenarios, S_{N+1}. The set of scenarios is thus logically complete, allowing one to compensate for residual risk posed by unknown scenarios. Whether this is a satisfactory counterargument may be questioned, as research has shown that the main uncertainties in risk analysis still are related to the (in)completeness of identified events (HSE, 2003b).

2.4 How likely is it that it will happen?

The answer to this question is the probability of each hazardous event. But what is probability? And is it distinct from frequency as an expression of likelihood? In common language, probability is a number between 0 and 1, while frequency expresses the number of events per time unit, having no upper restrictions (Wikipedia). Even though the other elements of Kaplan and Garrick's definition pose difficulties, these are minor compared to the dispute that the previous centuries saw over the interpretation of probability. Playing on the words of a classic film, Martz & Waller (1988) announce that 'Probability is a many splintered thing'. Some see probability as a sound mathematical theory, others think of it as the odds or a feeling associated with the outcomes of a future event, yet others understand it as an experimental process of observing the frequency of hits. While one man saw no objection to assigning a 67% probability that God exists (The Guardian, 2004), others struggle to understand how one can repeatedly lose in Yatzy given the same odds as the opponents.

The meaning of probability can be sought from three main standpoints: the classical theory, the relative frequency theory and the theory of subjective probability. About two decades ago, these were under considerable scrutiny regarding the interpretation of probability in risk analysis, exemplified in the academic correspondence between Watson (1994) and Yellmann & Murray (1995). The underlying question of these debates was whether probability, and hence risk, is an objective feature of the world, and what the implications of this are for risk analysis. Subsequently, researchers have concluded that what is important is not which school of thought you follow, but that the interpretation is chosen that best fits your purpose (Vatn, 1998).

2.4.1 Classical theory of probability

Up until the twentieth century, probability was by and large interpreted according to the classical theory, developed by mathematicians like Pascal and Laplace between 1654 and 1812 (Watson, 1994). Following this theory, probability is an objective property, derived from a set of equally likely possibilities and/or symmetrical features. The symmetrical properties of a die yield a probability of 1/6 of throwing a 6, while drawing an ace of spades and drawing a nine of clubs from a pack of cards have the equal probability of 1/52. Given a set of equally likely entities, Pr(A) is inferred by counting the proportion of favorable outcomes, N_A, amongst the total set of N possible outcomes:

Pr(A) = N_A / N    (2.3)

The probability Pr(A) of an event is given a priori, with no need for experimentation (Papoulis, 1964). According to Watson (1994), this is a satisfactory concept in games of chance, but invalid for situations not fulfilling the assumption of uniform possibilities. As this is certainly not true for either output or input probabilities of risk analysis, Watson rejects the classical theory as a basis for interpreting risk analysis results. Also Yellmann & Murray (1995) agree that the classical theory is too narrow a viewpoint for analyzing accident risk. From a generalist perspective, Papoulis (1964) criticizes the theory for being circular, as it in its own definition makes use of the concept being defined, concluding that equally likely means equally probable. Pointing to the not so obvious equal possibilities of giving birth to a boy and a girl, Papoulis further accuses the classical theory of implicitly making use of the relative frequency interpretation of the following section. In conclusion, one can say that the classical theory is of historical interest, but that its current use is limited to a small group of problems, of which accident risk is not one.
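Under the classical theory, a probability is obtained purely by counting an assumed set of equally likely outcomes, as in (2.3). The Python sketch below reproduces the die and playing-card examples above; nothing is estimated from data, and the choice of sample space carries the whole assumption.

    # Classical probability Pr(A) = N_A / N over finite sets of outcomes
    # that are assumed to be equally likely.
    from fractions import Fraction

    def classical_probability(favourable, sample_space):
        """Proportion of favourable outcomes in an equally likely sample space."""
        hits = sum(1 for outcome in sample_space if outcome in favourable)
        return Fraction(hits, len(sample_space))

    die = list(range(1, 7))
    deck = [(rank, suit) for rank in range(1, 14)
            for suit in ("spades", "hearts", "diamonds", "clubs")]

    print(classical_probability({6}, die))               # 1/6
    print(classical_probability({(1, "spades")}, deck))  # 1/52 (ace of spades)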


2.4.2 The relative frequency theory

As a reaction to the classical theory of probability, the relative frequency theory was developed by Von Mises around a century ago. Within this theory, the probability of A is the limit of the relative frequency with which A occurs in a long-run series of repetitions (Papoulis, 1964):

Pr(A) = lim_{n→∞} n_A / n    (2.4)

Since an experiment cannot be repeated for eternity, we never know the exact probability (Watson, 1994). Nevertheless, an objective definition is given, providing a precise meaning to probability and the ability to estimate it with confidence. In cases where the classical theory applies, the relative frequency theory will approach the same Pr(A). The theories are still fundamentally different, as the relative frequency theory deduces Pr(6) from repeatedly throwing a die and observing the occurrence of a 6, rather than through a priori inference of symmetrical properties. Like classical theorists, disciples of the frequentist school see probability as an objective feature. The difference is that while classical theory sees probability as given from objective properties, the relative frequency theory believes in the objectivity of measurements.
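The frequentist reading of Pr(6) can be imitated with a simulated die: as the number of throws grows, the relative frequency n_A/n settles around the classical value of 1/6, in the spirit of (2.4). This Python sketch only illustrates the convergence idea; it says nothing about how such probabilities are obtained for real accident scenarios.

    # Relative frequency interpretation: estimate Pr(6) as n_A / n for growing n.
    # A pseudo-random die stands in for the infinitely repeatable experiment.
    import random

    random.seed(1)
    hits = 0
    for n in range(1, 100_001):
        if random.randint(1, 6) == 6:
            hits += 1
        if n in (10, 100, 1_000, 10_000, 100_000):
            print(f"n = {n:7d}: relative frequency = {hits / n:.4f}  (1/6 = 0.1667)")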

The obvious limitation of the relative frequency theory is that it only applies to situations for which relative frequency data exist. In its strictest form, probability statements can only be given for experiments that are infinitely repeatable under constant conditions. Watson (1994) is clear in his case, prophesying that since we cannot observe the future even once, Von Mises would probably deny that risk analysis has any meaning at all. Without passing such a dystopian sentence, Martz & Waller (1988) agree that in cases of plant-specific low-probability/high-consequence events, the relative frequency interpretation is both practically and philosophically inappropriate. Typical events are core meltdown in a nuclear plant and the structural breakdown of an offshore platform. On the other hand, events of high probability/low consequence, like traffic accidents, may give meaning in the sense of relative frequency. But even in these cases the theory is likely to fail, since the framework conditions under which accidents occur hardly remain constant (Aven, 2007). For instance, traffic accidents are influenced by many transient factors, like vehicle technology, traffic lane designs and regulations on speed and alcohol (Elvebakk, 2007). As these are likely to have changed since past statistical observations were made, one can question whether relative frequencies offer reliable previsions of the future.

Most researchers agree that strict adherence to the relative frequency approach is unsuitable in risk analysis. Yet, many claim that the theory provides the basic meaning of probability. Amongst them are Yellmann & Murray (1995), asserting that relative frequency is the only rational basis for thinking about probability. The rationale is provided by Vaurio (1990), arguing that probability should always be given a fractional interpretation, i.e. as a ratio of hits in the long run. In this sense, a claim of being 92.7% sure that a decision is right means that the fraction of right decisions in the long run is 0.927. Kaplan & Garrick (1981) acknowledge this fractional meaning, while stating that probability is a distinct concept from observed frequency. In their view, studying statistical frequencies is the science of handling data, while probability is the art of handling lack of data. Owing to this, they suggest that relative frequencies be assigned by thought experiments, without ever having to perform a physical experiment. As this approach, named 'probability of frequency', is conceptually far from the original intention of Von Mises, it is not further discussed. What it does illuminate is that how probability is understood and the way it is derived are not necessarily the same thing.

2.4.3 Subjective probability

In the preface of De Finetti (1974)'s groundbreaking contribution to the theory of probability, the following thesis is displayed in capital letters:

PROBABILITY DOES NOT EXIST.

The sentence captures a radically different interpretation of probability: the subjective, or Bayesian, school of thought. De Finetti postulates that probability is not endowed with some objective existence. Rather, probability is a subjective measure of degree of belief, existing only in the minds of individuals. Being subjective, it follows that different people can assign dissimilar probabilities to the same event. This does not imply that probabilities are meaningless in the opinion of De Finetti; it is perfectly meaningful for an individual to express his belief in the truth of an event with a number between 0 and 1. Neither does it mean that the rules of probability are invalid; the numbers associated with two events of different likelihood must still obey the axioms of Kolmogorov.³ What it does mean is that probability is conditioned on an individual's current state of knowledge. As new knowledge is gained, individuals update their probabilities, intuitively or formally by Bayes' theorem. This is not in pursuit of a 'true' objective probability, but a means for strengthening one's own degree of belief.

³ Kolmogorov's axioms are a set of fundamental statements about the probability of events in a sample space. The axioms are functions, not meanings, of probability. They thus apply to all interpretations of probability, putting no requirements on the relationship between probability and real-world phenomena (see De Finetti (1974)).

Weather forecasting is a typical situation where subjective probabilities apply. The probability of sunny weather can neither be claimed from symmetrical properties, nor from repeated observations of the past. Instead, the meteorologist bases her previsions on professional know-how and complex analyses, constantly updating prior knowledge in search of a strengthened degree of belief. This does not mean that she cannot use weather frequency data as a source of knowledge, or express her predictions in terms of expected frequencies. Martz & Waller (1988) praise the Bayesian theory as the broadest of all approaches, allowing accommodation of experience from relative frequencies as well as symmetrical assumptions. While De Finetti (1974) finds definitions of objective probabilities useless per se, they are acknowledged as valid auxiliary devices. The reader should beware that if subjective and frequentistic assumptions are combined, the resulting probability is still subjective (Vaurio, 1990).

3 Kolmogorov's axioms are a set of fundamental statements about the probability of events in a sample space. The axioms are functions, not meanings, of probability. They thus apply to all interpretations of probability, putting no requirements on the relationship between probability and real-world phenomena (see De Finetti (1974)).

The subjective interpretation of probability is the dominating approach amongst risk analysts of today (Rausand & Utne, 2009). This is mainly because it applies to all types of events and uncertainties, in remarkable contrast to the classical and relative frequency theories (Wu & Apostolakis, 1990). Following this line of thought, it gives meaning to talk about the probability of both structural collapse and road accidents; analysts of Norse conviction may even calculate the risk of Ragnarok. Clearly favoring a Bayesian interpretation of risk analysis, Martz & Waller (1988) list nine reasons why it is philosophically and practically superior to the relative frequency theory. Watson (1994) also prefers the subjective approach, but his conclusion differs alarmingly from that of Martz and Waller. The inescapable problem of subjective probabilities, as Watson sees it, is the provision of subjective advice that may be acceptable for personal decision making, but lacks the scientific objectivity desired for complex risk decisions. Reasoning that the subjective theory of probability is philosophically, but not politically, satisfactory, Watson concludes that the outputs of risk analysis should be advisory rather than ruling. This pinpoints the core of the debate between Watson and Yellmann & Murray (1995), i.e. what the subjectivity of probability means for the interpretation of risk analysis. For our purpose, the problem can be reframed: what does the subjectivity of probability mean for the use of risk acceptance criteria? This question reappears in chapter 5.2, after the meaning and derivation of risk acceptance criteria have first been examined.

2.5 If it does happen, what are the consequences?

While positive outcomes are prominent in financial risk, accident risk is restricted to unwanted consequences (Aven, 2007). When speaking of consequences, Kaplan & Garrick (1981) thus refer to a measure of damage, $x_i$, for a specific scenario. This does not mean that, given its occurrence, a scenario will deterministically lead to one corresponding consequence. Figure 2.3 illustrates that there is a spectrum of possible consequences following an accidental event, varying in both severity and type. The damage should be regarded as a vector quantity rather than a single scalar, of which each element is assigned a corresponding probability:

$C = [x_1, x_2, \ldots, x_n]\,[p_1, p_2, \ldots, p_n]$   (2.5)
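As a minimal illustration of (2.5), the sketch below represents a hypothetical consequence spectrum as (damage, probability) pairs and computes the expected damage as one possible scalar summary; the values and variable names are invented for illustration and are not taken from the report.

```python
# Hypothetical consequence spectrum for one accidental event:
# each outcome is a (damage x_i, probability p_i) pair, cf. Eq. (2.5).
spectrum = [
    (0.0, 0.90),   # no significant harm
    (1.0, 0.08),   # single fatality
    (10.0, 0.02),  # multiple fatalities
]

# The probabilities in the spectrum should sum to one.
assert abs(sum(p for _, p in spectrum) - 1.0) < 1e-9

# Expected damage is one possible scalar summary of the vector quantity,
# at the cost of hiding the spread between outcomes.
expected_damage = sum(x * p for x, p in spectrum)
print(f"{expected_damage:.2f}")  # 0.28 with these numbers
```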


The probability of each consequence depends on many factors, of which the effectiveness of the reactive barriers indicated in Figure 2.3 is of special importance. The vulnerability of targets is also decisive, understood as the ability of objects to withstand the effects of a hazardous event (NS 5814, 2008). A timely example of variations in vulnerability is the frequently used notion of 'risk groups' when discussing likely consequences of the swine flu.

[Figure 2.3. Spectrum of consequences following an accidental event: the accidental event (AE) branches into consequences C1, C2, ..., Cn, each with an associated probability P1, P2, ..., Pn]

If only one type of consequence is considered, the general expression of (2.5) is limited to a one-dimensional probability distribution, for example over the number of fatalities. Kaplan and Garrick restrict themselves to exemplifying the consequences of loss of life and property. Consulting the large body of risk literature, one might also consider environmental and economic consequences, or specify damage to national heritage or critical infrastructure. One can further look at indirect or distal consequences, like damage to future generations, biodiversity or political and social disruption (Breakwell, 2007). The Chernobyl accident of 1986 tragically illustrates that the severest consequences may manifest themselves decades after an event. Some consequences may be impossible to quantify, like damage to a nature reserve or loss of trust or reputation. And whether quantifiable or not, prioritizing between different types of consequences inevitably involves judgments that are ultimately moral (Douglas, 1985).

Consequences related to loss of life are almost exclusively considered in regulatory and industrial settings of risk acceptance criteria. In recent years, increasing attention has been devoted to environmental damage. Due to quantification difficulties and the ubiquity of environmental consequences, agreed practices have yet to emerge (Skjong et al., 2007). The reader should note that although not formally included in an analyst's triplet of risk, other consequences may be important determinants of the meaning and acceptability of risk to an individual or society.


2.6 Safety

Having defined risk and its vital elements, one can rhetorically flip the coin and ask what constitutes safety. Intuitively, safety is understood as freedom from harm or the opposite of risk; the lower the risk, the greater the safety. Seeking a general conception of safety, Möller et al. (2006) discover that the concept is under-theorized. Analyzing the relationship between safety and risk, uncertainty and control, they conclude that safety is more than the antonym of risk. Of underrated importance is epistemic uncertainty, as many feel safer given near certainty than in less certain situations involving lower risk. Another finding is the distinction between absolute and relative concepts of safety. While the former implies that the risk of harm has been eliminated, the latter means that risk has been reduced or controlled to a certain level. The philosopher Næss (1985) was early engaged with this distinction, claiming that a Utopian search for absolute safety will necessarily conflict with individuals' quality of life.

Möller and his coworkers put the ethical and unattainable aspects of absolute safety aside, focusing on the different stringency of the two concepts. Suggesting that absolutely safe is the most stringent condition of safety, relatively safe is reserved for situations below this benchmark. Below the lowest level of safety that is socially acceptable, it is misleading to use the term safe. But how is such a level constructed? The answer of Möller et al. is that it is more than some 'opposite' of an acceptable risk level; one is not necessarily safe just because the risk is acceptable. Aven (2009) follows this thread, while counter-arguing that safety is the antonym of risk if the latter is defined in a broad sense. 'Safe' can then be defined by reference to acceptable risk, and 'acceptable risk' can again be rephrased as 'acceptable safety', as in the ISO/IEC Guide 51 (1999, p.2) definition of safety: 'Freedom from unacceptable risk'. Notwithstanding this, the guide advises against using the words safe and safety, on the grounds that they convey no useful extra information. What is more, the reasoning of Aven rests on his own definition of risk, which was presented and discarded in section 2.2. Although discussions on acceptable risk are often framed as a question of how safe is safe enough, the fuzzy notions of safe and safety are used with caution in this report.

Related to safety is the term security, restricted to harm from intentional acts like terrorism, sabotage or violence. Security is a broad concept, originally concerned with military and political threats to state sovereignty (Barry et al., 1998). Central to security is the concept of 'threat agents', meaning actors having the intention and capacity of inflicting damage on a vulnerable object (Salter, 2008). As security risks are less tangible and predictable than those of physical hazards, they have hitherto been given minimal attention within standards and research on risk acceptance criteria.


2.7 Risk assessment

Amongst the strengths of the risk definition of Kaplan & Garrick (1981) is its direct relevance to risk assessment. Figure 2.4 is adapted from the Norwegian standard NS 5814 (2008), illustrating the process of risk assessment within a broader framework of risk management. According to the standard, 'risk assessment' is a complete process covering the planning and execution of risk analysis and risk evaluation. A slightly different conceptualization is found in NORSOK Z-013N (2001), while HSE (2001b) and the US Presidential/Congressional Commission on Risk Assessment and Risk Management (1997) see risk assessment as the sole process of identifying and estimating the likeliness of adverse outcomes. The latter understands risk management as the process of analyzing, selecting, implementing, and evaluating actions to reduce risk, which is in accordance with the general conception of NS 5814. In this report, the term 'risk analysis' is reserved for the process of answering the three questions of Kaplan and Garrick, while 'risk assessment' is understood as a broader practice according to NS 5814. Due to the explanatory power of Figure 2.4, only one element is discussed in detail, namely the principal stage of risk evaluation, which forms the underlying objective of this report. NS 5814 (2008, p.6) defines risk evaluation as:

The process of comparing described or calculated risk with specified risk acceptance criteria.

Risk evaluation involves a decision on whether the risk is acceptable, followed by an assessment of the need for and feasibility of risk reduction measures.

The output of risk analysis forms the input to the risk evaluation process, as illustrated in Figure 2.4. This may be the expected number of fatalities or the fatal accident rate associated with a specific solution. Although Kaplan & Garrick (1981) urge caution in the use of single-number presentations of risk, the calculation of aggregated values is implied in NORSOK Z-013N (2001). Above all, this is a practical necessity, enabling comparison and prioritization of options against a set of risk acceptance criteria.

Figure 2.4 shows that risk acceptance criteria shall be set prior to the risk analysis, as part of the broader process of risk assessment. Unfortunately, NS 5814 provides no guidance on the establishment of such criteria. In the Norwegian offshore industry, the responsibility for defining risk acceptance criteria lies with the operator, in contrast to current practices in the Netherlands and the UK. Risk acceptance criteria are central also within the British Health and Safety Executive (HSE), although chapter 4.4 demonstrates that the practices are clearly distinctive. According to HSE (2001b), successful risk evaluation by and large depends on the formulation of risk acceptance criteria. But what is a risk acceptance criterion? And, far more intricate: what is acceptable risk?


[Figure 2.4. Risk analysis process description according to NS 5814 (2008): planning (initiating the analysis, problem and objectives formulation; defining the framework; establishing risk acceptance criteria; organisation of work; choice of methods and data sources), risk analysis (system description; identification of hazards and unwanted events; analysis of causes and probability; analysis of consequences; risk description) and risk evaluation (comparison with risk acceptance criteria; identification of risk-reducing measures and their effect; documentation and conclusions), together constituting risk assessment and followed by risk treatment]


2.8 Risk acceptance criteria

A criterion can be understood as (Webster, 1978):

A standard of judging; any established law, rule, principle, or fact by which a correct judgment may be formed.

In the context of accident risk, a criterion may be a term of reference, R̄, against which the theoretical risk, R, is compared, as illustrated in Figure 2.5. For comparison, R and R̄ must necessarily be expressed by the same risk metric. In principle, R̄ can be any type of reference, like a company's past performance, the best industrial benchmark or legal requirements. R may be calculated in advance of or retrospectively for a period, managing future risk or assessing past performance. In most standards the term 'risk acceptance criteria' is specified, implying that comparison is a matter of acceptance or rejection of future risk. NS 5814 (2008, p.6) defines risk acceptance criteria as:

Criteria used as basis for decisions about acceptable risk.

This is complementary to the definition of NORSOK Z-013N (2001, p.7):

Criteria that is used to express a risk level that is considered tolerable for the activity in question.

Since 'tolerable' is not tantamount to 'acceptable', the definition of NS 5814 (2008) is the most adequate for our purpose.

[Figure 2.5. Principal illustration of risk acceptance criteria (adapted from Breugel (1998)): the risk of the system, R = f{Pr, C}, is compared with the criterion R̄; if R < R̄ the solution is accepted, otherwise Pr or C must be reduced and the risk re-evaluated]
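Read as a procedure, Figure 2.5 amounts to a simple comparison loop. A minimal sketch with purely hypothetical numbers, assuming R and R̄ (written R_bar below) are already expressed in the same metric:

```python
# Hypothetical values: calculated risk R and criterion R_bar must be
# expressed in the same metric, here taken to be individual risk per year.
R = 2.0e-4       # calculated risk for the proposed solution
R_bar = 1.0e-3   # risk acceptance criterion

if R < R_bar:
    print("Risk accepted")
else:
    print("Not accepted: reduce Pr or C and re-evaluate")
```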

Risk acceptance criteria need not be quantitative (NS 5814, 2008). According to NSW (2008), it is essential that certain qualitative principles are adopted, irrespective of the numerical value of quantitative acceptance criteria. Common qualitative criteria are:

- All avoidable risks shall be avoided
- Risks shall be reduced wherever practicable
- The effects of events shall be contained within the site boundary
- Further development shall not pose any incremental risk

2.9 Acceptable risk

NS 5814 (2008, p.6) defines acceptable risk as:

Risk which is accepted in a given context based on the current values of society and in the enterprise.

Those who seek a generally agreed definition of acceptable risk are, however, likely to be disappointed.

2.9.1 One accepts options, not risks

The mighty quintet of Fischhoff et al. (1981) ended their quest by concluding that 'acceptable risk' is an expression with wrongful connotations that should neither be defined, nor used in isolation. Rather, it shall be employed as an adjective, describing a specific kind of decision problem denoted 'acceptable-risk problems'. With this commandment, Fischhoff et al. (1981, p.3) underline that risk acceptability is inherently contingent on time and situation, and hence never absolute, nor universal:

The act of adopting an option does not in and of itself mean that its attendant risk is acceptable in any absolute sense. Strictly speaking, one does not accept risks. One accepts options that entail some level of risk among their consequences.

The acceptability of an option represents a trade-off between the full set of risks, costs and benefits associated with it. In turn, the desirability of these factors depends on the other options, values and facts examined in the decision making process. Owing to this, the most acceptable option in an acceptable-risk problem may not be the one with the least risk. This is why Fischhoff et al. find interpretations like that of Kaplan & Garrick (1981), who see acceptable risk as the risk associated with the option offering the best mix of risk, costs and benefits, misleading. Still, Kaplan and Garrick contribute a to-the-point formulation, namely that no risk is acceptable in isolation. Consequently, the discouraging conclusion of both stances is that no level of risk can be specified to mark the line between acceptable and unacceptable risk.


2.9.2 Acceptability is not tantamount to tolerability

The difficulty of defining acceptable risk is not only due to the inherent contingencies of acceptable-risk problems. Another challenge is the disparate terminology across countries and sectors. The UK holds an exceptional position, distinguishing between 'tolerable' and 'acceptable'. The distinction is stressed in HSE's report on the tolerability of risk from nuclear power stations (HSE, 1992, p.2):

'Tolerability' does not mean 'acceptability'. It refers to a willingness to live with a risk so as to secure certain benefits and in the confidence that it is being properly controlled. (..) For a risk to be 'acceptable' on the other hand means that for purposes of life or work, we are prepared to take it pretty well as it is.

In Norway, the Netherlands, France and Germany, this distinction is not clearly drawn, possibly resulting in inconsistent terminology as in NORSOK Z-013N (2001). To avoid confusion, 'tolerability' is in this report reserved for discussions on the ALARP principle of HSE. Beyond that, 'risk acceptance' is applied, since it is the most consistently used term in standards and literature on the subject. Quite paradoxically, the meaning of risk acceptance is properly sought in HSE's definition of tolerability. The notion of acceptable-risk problems is used when applicable, although without following the advice of Fischhoff et al. (1981) of banning the substantival notion 'acceptable risk'. This is simply because, in the pursuit of a sound formulation of risk acceptance criteria, some faith is needed in that it makes sense to define an unacceptable level of comparable risk, as defended by NSW (2008) and HSE (2001b).

2.9.3 Factors influencing risk acceptance

From the definitions of HSE (1992) and Fischhoff et al. (1981), it can be deduced that our willingness to accept risk depends on the benefits from taking it, the extent to which it can be controlled (personally or institutionally) and the types of consequences that may follow. In his groundbreaking article 'Social benefit versus technological risk', Starr (1969) observes that people's willingness to accept risk is substantially greater for voluntary activities than for involuntary ones. Typical voluntary risks are those of skiing or smoking cigarettes, over which the individual also exercises personal control. Living in the vicinity of a newly built nuclear plant, on the contrary, is associated with both an external locus of control and involuntariness. Although dependent on the benefits generated from the plant, the extent to which the plant owner controls the risk and so on, the risk is less likely to be judged acceptable. This points to a later observation of Starr & Whipple (1980), in that benefits and voluntariness are relative to the evaluator. The notions of personally and societally acceptable risk are introduced, implying that one shall always ask the question of acceptable risk to whom. Related are the terms societal and individual concerns, expressing impacts on society and on things individuals value personally. While the government sees the benefits of electricity generation and employment, these may for neighbors of the plant be minor compared to the risk of nuclear accidents. Balancing individual and governmental concerns is therefore fundamental when developing national policies on risk acceptance (NSW, 2008).

2.9.4 Risk acceptance is a social phenomenon

Douglas (1985) asserts that equity and personal freedom are moral determinants of risk acceptability. If risks, benefits and control measures are unevenly distributed, risk acceptance is likely to be low. Her arguments are rooted in the belief that risk acceptance constitutes the often neglected social dimension of risk perception. Although risk acceptance is related to the factors influencing individual perception of risk, it is by and large culturally determined through social rationality and institutional trust:

The question of acceptable standards of risk is part of the question of acceptable standards of living and acceptable standards of morality and decency, and there is no way of talking seriously about the risk aspects while evading the task of analyzing the cultural system in which the other standards are formed (Douglas, 1985, p.82).

The former Soviet Union serves as an example. The totalitarian government's concealment of the Chernobyl accident amplified institutional distrust, which in turn lowered public acceptance of nuclear risk (Breakwell, 2007). A less extreme manifestation is the different degrees of risk aversion assumed in the UK and the Netherlands, which is discussed in section 3.4. Risk aversion, understood as disproportional repugnance towards multiple-fatality events, is central to risk acceptance (Ball & Floyd, 1998). The contrast between traffic accidents and a core meltdown serves as an illustration; being risk averse implies that a given number of traffic fatalities distributed over many accidents is accepted over an equal number of deaths in one nuclear accident. Risk aversion and cultural preferences vary within regions or cities, as well as between countries (Nordland, 2001).

Closing this presentation, one can conclude that risk acceptance is a complex issue, going beyond the estimation or physical magnitude of risk. As such, decisions on risk are fraught with difficulties of a rational, moral and political character. Fischhoff et al. (1981) address five complexities of acceptable-risk problems:

- Uncertainty about problem definition: Is the decision explicit? What is the hazard, the consequences and the possible outcomes? The outcome of a decision may already be determined by the ground rules.

- Difficulties in assessing the facts: How are low probabilities assessed and expert judgment applied? The treatment of factual uncertainties may prejudice the conclusion.

- Difficulties in assessing the values: How are labile values confronted or inferred? If people are asked to express their opinion, uncertainties are introduced as they may not be aware of their values.

- Uncertainty about the human element: What are the accuracy of laypeople's perceptions, the fallibility of experts and the rationality of decision makers? When assumptions about the behavior of experts, laypeople and decision makers go unrecognized, they can lead to bad decisions and distort the political process.

- Difficulties in assessing decision quality: How much confidence do we have in the decision making process? An approach to acceptable-risk decisions must be able to assess its own limits.

The five complexities offer valuable insights that practitioners should have in mind when making decisions about risk. Evaluating risk by a predefined set of acceptance criteria pivots on the contradiction of seeking a rational and objective decision criterion in a matter that is utterly contextually contingent. When formulating risk acceptance criteria, it is thus paramount to recognize that their purpose is to aid practical decision making on risk. This calls for a careful examination of how criteria are expressed and the manner in which they are set, which are the topics of the following two chapters.


3 Expressing risk acceptance criteria

3.1 Introduction

For risk acceptance criteria to be operational, a means of describing risk levels is required. The importance of choosing an adequate expression of risk acceptance is stressed by Holden (1984), warning that improper metrics produce anomalous conclusions. Flipping the coin, harmonization of well-chosen decision parameters facilitates a consistent, systematic and scientific decision making process, as advocated by Skjong et al. (2007).

The fundamentals of commonly used metrics are discussed in this chapter. First, the concepts of risk metrics, individual risk and societal risk are introduced, followed by a briefing on aspects to consider when choosing metrics for the expression of risk acceptance criteria. Central metrics are presented thereafter, with emphasis on underlying assumptions, areas of application, strengths and fallacies.

3.1.1 Risk metrics

Risk can be expressed in multiple ways, relating to the spectrum of consequences and the format of presentation. Consulting NORSOK Z-013N (2001), risk criteria range from qualitative matrices and wordings to quantitative metrics. Due to their practical prominence, the latter are the focus of this report. Baker et al. (2007, Appendix H-1) define a risk metric as:

A key performance indicator used by managers to track safety performance; to compare or benchmark safety performance against the performance of other companies or facilities; and to set goals for continuous improvement of safety performance.

The notion 'key performance indicator' (KPI) indicates that risk metrics describe safety performance, which one is able to measure after a period has passed. A risk metric is thus a measurable quantity, describing a consequence to the right in the bowtie model of Figure 2.3. The line between quantitative and qualitative criteria is difficult to draw. Is zero tolerance of fatalities a quantitative metric, or merely a qualitative vision? And does the grouping of consequences in a risk matrix yield qualitative criteria? Remembering the words of Campbell (2005), NORSOK Z-013N (2001) seemingly refers to criteria derived from qualitative analyses rather than to qualitative criteria as such.

The Norwegian Petroleum Safety Authority (PSA, 2001) requires risk acceptance criteria for personnel safety and third party risk from major accidents. The focus of this report is therefore on risk metrics that consider fatalities as the consequential endpoint. For a discussion of occupational accident risk criteria, the reader may consult Kjellén & Sklet (1995). It should be noted that PSA also prescribes the use of environmental risk criteria. Due to the characteristic nature of risk from pollution, environmental risk metrics are excluded from this study.

3.1.2 To whom it may concern

There are broadly two ways of expressing risk to persons (HSE, 1992, p.15):

Individual risk: The risk to any particular individual, either a worker or a member of the public. A member of the public can be defined either as anybody living at a defined radius from an establishment, or somebody following a particular pattern of life.

Societal risk: The risk to society as a whole, as represented, for example, by the chance of a large accident causing a defined number of deaths or injuries. More broadly, societal risk can be represented as a 'detriment', viz the product of the total amount of damage caused by a major accident and the probability of this happening during some specified period of time.

The distinction between the notions, and the call for considering both, is best explained by an illustrative example. Ball & Floyd (1998) point out that at a particular point along a route for the transport of dangerous goods, the individual risk may be very low. However, the chances of an accident somewhere along the route may be significant, posing a great aggregate risk to society. Conversely, in cases where the societal risk is low, e.g. on a scarcely manned installation, there might still be individuals experiencing undue levels of risk. Both individual and societal risk metrics are therefore necessary to provide an adequate description of the risk posed by a particular system. According to HSE (1992), it is furthermore essential to be clear as to whom a figure of risk applies. For instance, it is meaningless to calculate a national average risk of being killed while skydiving. Therefore, in order of descending voluntariness and involvement, the notions of employees, third and fourth parties are key metadata for individual risk in particular (Pasman & Vrijling, 2003). Similarly, residential, sensitive and transient populations are relevant constructs when putting a figure to societal risk (HSE, 2009). It should be noted that individual and societal risk are distinct from, and only partly related to, the notions of individually and socially acceptable levels of risk introduced by Starr & Whipple (1980).

3.2 Aspects in the choice of risk metrics

3.2.1 Generic requirements according to NORSOK Z-013

In the annex of NORSOK Z-013N (2001), generic guidelines for choosing adequate risk criteria are provided. These are relevant for all industries. Perhaps disappointingly, the guidelines concern the qualities of risk metrics as such, rather than how quantities are to be established. On the positive side, they imply a conscious approach to the limitations and communication of risk metrics (Vinnem, 2007). According to the standard, a risk metric should satisfy the following four qualities as far as possible:

- Be suitable for decision support. The most important property of risk acceptance criteria is their ability to provide input to decisions regarding risk reducing measures. They have to express the effect of such measures, preferably in a precise manner, and associated with particular features of the activity in question.

- Be suitable for communication. Risk acceptance criteria and results from risk analysis shall be easy to understand and interpret for non-experts, such as operational management or the public at large. Criteria expressing a societal dimension often fulfill this requirement, as comparison with other activities in society is enabled. However, one must be aware that criteria that appear easy to understand may represent an oversimplification if the decision problem is very complicated.

- Be unambiguous in their formulation. This implies a high level of precision, explicit system limits defining what situations or areas the criteria are valid for, and a conscious approach to averaging of risk over time, areas and groups of people. Returning to PSA (2001), methods for averaging risk shall be used to ensure that the acceptance criteria for the personnel as a whole and for exposed groups of personnel complement each other.

- Be concept independent. The criteria shall not favor any particular concept solution, explicitly, nor implicitly through the way risk is expressed.

NORSOK Z-013N (2001) emphasizes that risk metrics are fraught with varying degrees of uncertainty, depending on the consequential endpoint and the level of precision. Since uncertainty increases with the required level of detail, uncertainty considerations might contradict a criterion's score on the requirement of suitability for decision making. The standard presupposes that risk metrics reflect the approach to risk analysis and are consistent with previous use within the company.

3.2.2 Pragmatic considerations

Still consulting NORSOK Z-013N (2001), one reads that the intended use and decision context shall be considered when choosing risk acceptance criteria. Pragmatic considerations relating to life cycle phase, systems or activities strongly influence the feasibility of risk metrics. Whether the acceptance criteria facilitate decision making on risk reducing measures or enable comparison of overall risk levels, their applicability changes across situations. By example, the different contexts of deciding on detailed design solutions in the engineering phase, and of broadly comparing two alternative field developments in an early concept study phase, constrain the analysis and evaluation of risk differently.

Leaving the realm of the offshore industry, it is tempting to generalize factors of pragmatic importance. Rather than focusing on the practical use of risk acceptance criteria, HSE (1992) is concerned with the subjects of interest: the hazard and those at harm. According to HSE, criteria shall be chosen based on a characterization of the hazard in question, the nature of harm (whether fatalities are prompt or delayed) and characteristics of the populations at risk. Holden (1984) similarly calls for an adequate description of the particular risk patterns. Such a description may be simple or complex, but shall capture both the totality and the distribution of risk. The observant reader may notice that the totality of risk prescribes the use of societal risk metrics, whilst the distribution of risk is best captured by individual risk metrics.

3.2.3 Past and future observations

Risk metrics are often derived from historical data, based on averages of previous periods and assuming constant trends in the future (Vinnem, 2007). Such an approach is justifiable when the purpose is to monitor trends in risk levels, but runs into philosophical difficulties when projecting future levels of acceptable risk. In the literature, it is seldom specified whether one is to use the predicted number of occurrences (a parameter) or the historically measured number of occurrences (an estimate) in the expression of risk acceptance criteria. This is problematic, because the two quantities rely on different assumptions regarding future exposure and contextual premises. Remembering the fallacies of frequency-based approaches, both risk levels and associated benefits are destined to change along with the population at risk. Consequently, it is questionable whether the past is a legitimate predictor of future risks and their acceptability. The latter issue is addressed by Fischhoff et al. (1981), discussed under the topic of bootstrapping in chapter 4.3.

3.3 Individual risk metrics

Individual risk metrics express the probability that an individual gets killed or injured per some appropriate measure of exposure, for instance year, km traveled or work hours. If the severity of consequences is high, fatalities are often considered at the expense of risk to health and injuries (Skjong et al., 2007). According to Marszal (2001), individual risk metrics are the most common measure of risk, providing useful information for purposes of facility siting and regulatory oversight. Individual risk metrics may also answer an individual wondering what the risk is to him and his family (HSE, 1992). Since assessing risk to an actual individual (fully taking account of the circumstances in which exposure arises) is a cumbersome task, the average risk of one or more hypothetical persons is usually calculated. This is an assumed individual with some fixed relation to the hazard, to which actual persons can compare their own patterns of exposure. Not only does such an approach allow risk to be meaningfully assessed independently of the people actually exposed, one can also take into account that exposure is seldom uniform (HSE, 1992). Although individual risk metrics are favored by many, they possess fundamental weaknesses. Most conspicuous is the inherent limitation that they do not address the whole risk picture, that is, the totality of risk as expressed by societal risk metrics (Evans & Verlander, 1997; Holden, 1984; HSE, 2009).

The main expressions of individual risk are individual risk per annum (IRPA) and localized individual risk per annum (LIRA). Sometimes the abbreviation IR (individual risk) is used, without explicitly defining the type under consideration. As the two notions rest on distinct assumptions and convey different meanings, there is a call for clarifying their distinctiveness and how both can be meaningfully utilized.

3.3.1 Individual risk per annum - IRPA

NORSOK Z-013N (2001, p.44) defines individual risk as:

IR is the probability that a specific individual (for example the most exposed individual in the population) should suffer a fatal accident during the period over which the averaging is carried out (usually a 12 month period).

When considering a period of 12 months, the metric is denoted individual risk per annum (IRPA). Since calculations are usually carried out for a hypothetical person rather than a specific individual, IRPA is sometimes referred to as average individual risk (AIR).


The risk of performing an activity

IRPA expresses the risk you as an individual bear when performing an activity, whether it is crossing the street or the superior task of living. The numeric value of IRPA is derived from a range of assumptions, like age, sex, workplace or leisure activities. For e.g. a worker, the total IRPA is calculated by summing the IRPA for each work-related activity, $a_n$ (Rausand & Utne, 2009). A conceptual formula for IRPA is:

$\mathrm{IRPA}_a = \text{accident frequency} \times \Pr(\text{performing } a) \times \Pr(\text{dies} \mid \text{performing } a)$   (3.1)

The formula conveys that it is not sufficient to consider the frequency of accidents, since exposure and susceptibility are decisive factors. IRPA is therefore suited for expressing risk to particularly exposed individuals or groups, like workers or users of a product or service (Skjong et al., 2007). Because the effects of operational changes or risk reducing measures can be explicitly reflected in IRPA, the criterion aids decision making on actions affecting the safety of individuals (NORSOK Z-013N, 2001). In the UK, IRPA is utilized when expressing acceptance of first and third party risk related to plant activities. Typical quantities were recently reviewed by HSE (2009), advising IRPA to be less than 1 in 1000, or $10^{-3}$, for workers in the UK nuclear power sector. Moving across the North Sea, Vinnem (2007) reports a similar criterion of $10^{-3}$ used by some Norwegian oil and gas offshore operators, while commenting that this is a very lax limit. By comparison, the IRPA of death by lightning is recognized to be $10^{-6}$ (HSE, 1992).
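A minimal sketch of how the summation over activities could be carried out in the spirit of Eq. (3.1); the activities, frequencies and probabilities are hypothetical and chosen for illustration only.

```python
# Hypothetical data per activity a: (accident frequency per year while
# performing a, probability of performing a, probability of dying given
# an accident during a), cf. Eq. (3.1).
activities = {
    "crane operations": (2.0e-2, 0.10, 0.05),
    "helicopter transfer": (5.0e-4, 0.02, 0.80),
    "routine inspection": (1.0e-3, 0.30, 0.01),
}

# IRPA contribution from each activity, summed to a total IRPA for the worker.
irpa_per_activity = {
    name: freq * p_perform * p_die
    for name, (freq, p_perform, p_die) in activities.items()
}
total_irpa = sum(irpa_per_activity.values())

for name, irpa in irpa_per_activity.items():
    print(f"{name:20s} IRPA contribution: {irpa:.1e}")
print(f"Total IRPA: {total_irpa:.1e}")  # about 1.1e-4 with these numbers
```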

Averaging over people, exposure and consequences

NORSOK Z-013N (2001) evaluates IRPA as relatively simple for non-experts to understand and use in comparisons, and as less concept dependent than common societal indicators. The former claim is contested by the literature on risk perception, which reveals laypeople's difficulties in grasping risk expressed by small probabilities (Slovic, 1987). A more critical weakness stressed by NORSOK Z-013N (2001) is the ambiguity that follows from the difficulty of defining precise exposure. There is relatively high uncertainty tied to such calculations, also because the entire accident sequence needs to be quantified. The latter concerns, however, numeric values resulting from quantitative risk analyses and not simple frequency-based approximations. Still, this does not leave frequency-based approaches free of charge. Reproducing the objections of Holden (1984), IRPA can be mathematically reduced by increasing the number of people over which risk is averaged. The individual risk appears lowered, although more people are exposed to a level of risk that remains as before. Another concern is whether it is ethically sound to set criteria averaged over differently exposed persons or periods, which is further discussed in section 5.2.
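Holden's averaging objection can be illustrated with a deliberately simple, hypothetical calculation: keeping the expected number of fatalities per year fixed while enlarging the population over which it is averaged makes the reported individual risk shrink, even though nothing about the hazard has changed.

```python
# Hypothetical illustration of the averaging objection: the expected number
# of fatalities per year from the hazard is held fixed, while the population
# over which the risk is averaged grows.
pll = 1.0e-2  # expected fatalities per year from the hazard (hypothetical)

for n_people in (10, 100, 1000):
    average_ir = pll / n_people
    print(f"Averaged over {n_people:4d} people: IRPA = {average_ir:.1e}")

# Output: 1.0e-03, 1.0e-04, 1.0e-05 -- the figure looks better, although the
# underlying hazard and its total burden are exactly the same.
```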

Not only does IRPA apply average figures over people and periods; averaging is also performed over spectra of consequences. Comparison with an averaged number containing multiple fatalities will, according to Holden, facilitate anomalous conclusions, since individual risk statistics are mostly made up of many single-fatality accidents. Although three decades have passed since Holden voiced his concerns, present-day researchers are still occupied with this fundamental deficiency. Amongst them are Jongejan et al. (2009), worrying that individual criteria cannot prevent single accidents killing a large number of people.

3.3.2 Localized individual risk - LIRA

The localized individual risk (LIRA) can be defined as (Jongejan, 2008, p.3):

The annual probability that an unprotected, permanently present individual dies due to an accident at a hazardous site.

In contrast to IRPA, which is dependent on the characteristics of the actual or hypothetical individuals in question, LIRA is a property of the location.

LIRA is a property of the location

LIRA may rightfully be referred to as geographic rather than individual risk (Marszal, 2001). Rausand & Utne (2009) find strong assumptions underlying this metric, notably that a hypothetical person is residing at a particular location 24 hours a day, throughout a whole year. LIRA considers only major accident risk in the vicinity of one or more hazardous installations, ignoring the multiplicity of other risks faced by an individual. Nevertheless, such simplifying assumptions do not leave LIRA simple to calculate. Figure 3.1 illustrates that complex factors must be accounted for, like wind directions, topography and dose-response relationships. Sophisticated computer tools have been developed for this purpose, for instance the software program PHAST launched by DNV (2008).

[Figure 3.1. Calculating LIRA: initial discharge (process failures, material properties, storage and process conditions), dispersion (topography, weather, wind directions, liquid droplet thermodynamics) and effect assessment (flammable, explosive, toxic)]

Land use planning

Since a person is always assumed present, LIRA does not change even if no one is at the spot when an accident occurs (Bottelberghs, 2000). Due to its location-specific properties, LIRA is almost exclusively used in land use planning, regarding the siting of hazardous plants in residential areas and vice versa. Helpful in this regard is the use of iso-risk contour maps, displaying lines that connect locations with the same value of LIRA, as illustrated in Figure 3.2 (Pasman & Vrijling, 2003). Common practice in the UK and the Netherlands is the use of safety distances, prohibiting the accommodation of vulnerable objects within certain contours. Typically, zones for residential housing are set for iso-risk contours with LIRA lower than $10^{-6}$ (Bottelberghs, 2000). In the UK, LIRA is coupled with the concept of dangerous dose, advising against homes being built if the probability of receiving a chemical dose leading to severe distress is greater than $10^{-5}$ per year (HSE, 1992). Since dangerous dose is an intricate concept stemming from the discipline of toxicology rather than that of risk analysis, it is not further examined.

[Figure 3.2. Risk contours on a local map]
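A minimal sketch of contour-based screening against the thresholds cited above; the candidate locations and their LIRA values are hypothetical, and the limits are only indicative of the Dutch and UK practice referred to.

```python
# Hypothetical LIRA values (per year) at candidate building locations.
locations = {
    "site A": 4.0e-7,
    "site B": 3.0e-6,
    "site C": 2.0e-5,
}

RESIDENTIAL_LIMIT = 1.0e-6   # illustrative zoning limit for residential housing
UPPER_LIMIT = 1.0e-5         # illustrative limit above which housing is advised against

for name, lira in locations.items():
    if lira < RESIDENTIAL_LIMIT:
        verdict = "residential development possible"
    elif lira < UPPER_LIMIT:
        verdict = "restricted use; case-by-case evaluation"
    else:
        verdict = "advise against housing"
    print(f"{name}: LIRA = {lira:.1e} -> {verdict}")
```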

Evaluating LIRA

As NORSOK Z-013N (2001) seemingly treats IRPA and LIRA under the same notion of IR, the strengths and weaknesses of LIRA may be assumed similar to those of IRPA. Such a conclusion should not be drawn without reconsidering the distinguishing features of LIRA in light of the NORSOK Z-013N (2001) requirements. Being a localized risk metric, it scores relatively high on the aspect of precision in decision making support, as it by definition is concerned with particular areas of an installation or site. LIRA may be unambiguously defined with clear system limits, due to the stringent assumptions and its inherent focus on physical boundaries. Whether ambiguities are introduced through averaging is pragmatically conditioned on the particular risk, the area and its inhabitants. The unrealistic assumption of a person spending a whole year constantly at a particular point yields little confidence in the risk level a person actually faces. Nevertheless, the metric is relatively easy to grasp for non-experts, owing to the simplified assumptions and the visual aid of iso-risk contours on maps.

3.4 Societal risk metrics

Societal risk is a fuzzy concept. Reviewing the evolution of societal risk criteria, Ball & Floyd (1998) begin by acknowledging that there exists no overly prescriptive definition of the term. Rather, its precise meaning seems rooted in an individual's professional background. Seen through the concretizing eyes of an engineer, societal risk is simply the relationship between accident frequency and the number of people suffering harm. Social scientists tend to favor a broader view, for instance by incorporating socio-political responses. A difficulty with societal risk is hence the term itself. For clarification, Ball and Floyd suggest three categories of societal risk:

- Collective risks, covering non-accidental exposure to harmful materials
- Societal risks, concerning single accidents with the potential of causing multiple fatalities
- Societal concerns, associated with the overall impacts of particular technologies

Societal risk is a subset of societal concerns (HSE, 2001b). However, societal concerns may also be triggered by accidents with one or no fatalities, depending on the characteristics of the event and the technology in question. A typical example is the Three Mile Island incident of 1979. Although no people were harmed in the event, public skepticism against nuclear power, divergent expert opinions and unsuccessful risk communication nourished a major outcry (Breakwell, 2007).

An alternative notion is group risk, describing the risk to a group of persons, for example workers or travelers (Rausand & Utne, 2009). This notion is in line with Ball and Floyd's second interpretation of societal risk, which is employed in the following.

According to Marszal (2001), societal risk criteria are well suited for regulatory approval and high-level management oversight of process plants. As pointed out by Skjong et al. (2007), they help ensure that the risk imposed on society by large technological projects is adequately controlled. However, following the reasoning of Ball & Floyd (1998), the usefulness of societal risk metrics is not universally accepted, due to difficulties in defining acceptable levels and methodological problems in generating the necessary data. In a consultancy report provided by HSE (2009), it is stated that although societal risk is not a novel concept, its explicit incorporation into decisions on land use planning and on-site safety measures in the UK is new. There are still issues to be dealt with, like the incremental build-up of populations and the redistribution of costs resulting from the altered balancing point between safety and development. One might also argue that an individual is rather concerned with individual risk (Ball & Floyd, 1998). Taking the viewpoint of an individual, she has probably none but morbid interests in the number of people dying with her in an accident. Still, there are few disputes over the principal argument of HSE (1992), that from a societal point of view, decisions should be based on the totality of risk borne by society as a whole.

Commonly used societal metrics are FN-curves, the fatal accident rate (FAR) and potential loss of life (PLL). Due to the above-mentioned disparities in interpretations of societal risk, the various metrics convey different meanings, not necessarily consistent with those of Ball & Floyd (1998). What is more, they are all fraught with assumptions revealing unforeseen implications when used in practical decision making. A possible explanation is found in the TOR report of HSE (1992), admitting that the calculation of societal risk is a complex process. Individuals extend over different generations, are geographically dispersed, and accidental releases come in a wide range of magnitudes. Although the values of individual and societal risk are linked, their precise relationship depends on numerous factors. Utmost important is the number of people at risk, accompanied by hazard characteristics and different fatality probabilities across activities or locations (Ball & Floyd, 1998). As in the calculation of individual metrics, societal risk metrics apply average figures since these factors vary with time.


3.4.1 FN-curves

An FN-curve is basically a plot showing the frequency of events killing N or more people, as shown in Figure 3.3. It can be used for presenting accident statistics and risk predictions, as well as for drawing criterion lines for acceptable levels of societal risk. Mathematically, it is derived from the commonly used expression of risk as a product of frequency and consequence, denoted number of fatalities per year. Perhaps counter-intuitively, the fatalities are not integers, since they are probabilistically generated (Ball & Floyd, 1998). Graphically, the curve is presented by taking the double logarithm of this expression, due to the wide range of possible values of high consequence/low probability risk (Evans & Verlander, 1997).

[Figure 3.3. FN-curve for road, rail and air transport 1967-2002 (adapted from HSE (2003b)); x-axis: number of fatalities, N; y-axis: accidents per year with N or more fatalities]

There is a distinction between FN- and fN-curves that should be clarified. While the former expresses the cumulative frequency of N or more fatalities, an fN-curve represents the frequency of accidents having exactly N fatalities. As fN-curves are not very informative and rarely used for expressing risk acceptance criteria (HSE, 2009), their relationship with FN-curves is not further examined. For a thorough discussion of the subject, the reader may consult Evans & Verlander (1997). Due to their numerical distinction, and since NORSOK Z-013N (2001) uses the notion of fN when obviously speaking of cumulative probabilities, there is a call for standardization of terminology to prevent erroneous criteria being drawn.
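To make the distinction concrete, the following sketch converts a hypothetical fN-table (frequency of accidents with exactly N fatalities) into the cumulative FN-form (frequency of accidents with N or more fatalities); the numbers are invented for illustration.

```python
# Hypothetical fN-data: frequency per year of accidents with exactly N fatalities.
fN = {1: 1.0e-2, 2: 3.0e-3, 5: 5.0e-4, 10: 1.0e-4, 50: 1.0e-5}

def FN(n, fN_table):
    """Cumulative frequency of accidents with n or more fatalities."""
    return sum(freq for fatalities, freq in fN_table.items() if fatalities >= n)

for n in sorted(fN):
    print(f"N >= {n:2d}: F = {FN(n, fN):.2e} per year")

# The FN-curve is the plot of these cumulative frequencies against N,
# usually on log-log axes.
```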

Risk aversion in FN-criterion lines

When formulating risk acceptance criteria, a factor α is introduced to express risk aversion:

$R = F \times N^{\alpha}$   (3.2)

Taking the log-log of the expression yields:

$\log R = \log F + \alpha \log N$   (3.3)

α constitutes the slope of the criterion line, as illustrated in Figure 3.4. Additionally, an anchor point (a fixed pair of consequence and frequency) is needed to describe the crossing of the y-axis (Skjong et al., 2007). The literature review of Ball & Floyd (1998) shows that deciding on α is a disputed task. In the UK, HSE prescribes a neutral factor of -1, in contrast to the Dutch government's favoring of a risk-averse factor of -2. The rationale is that people are believed to be more than proportionally affected by the number of fatalities, leaving the acceptable frequency of an accident killing 100 people 10 times lower than that of one killing 10 people. Compressing a complicated discussion, a neutral approach is preferable, as one otherwise introduces hidden weighting factors making the decision making process opaque. This is mainly because the greater the aversion factor, the stricter the criterion and hence regulation will be, ex ante ruling out the potential benefits of a proposal. Approaching the problem differently, Linnerooth-Bayer (1993) sees the great problem not in which aversion factor to use, but in how the public's aversive concerns are addressed.

[Figure 3.4. Risk aversion in FN-criterion lines: criterion lines in the F-N plane with slopes α = -1 (risk neutral) and α = -2 (risk averse)]
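A minimal sketch of how a calculated FN-curve might be checked against a criterion line defined by a slope α and an anchor point; the anchor mirrors the HSE (2001b) example quoted in the next subsection (a frequency of 2·10^-4 per year at N = 50, slope -1), while the calculated points are entirely hypothetical.

```python
# Criterion line F_limit(N) = C * N**alpha, fixed by an anchor point (N0, F0).
alpha = -1.0
N0, F0 = 50, 2.0e-4        # anchor point: 2e-4 per year at N = 50 (cf. HSE (2001b))
C = F0 / (N0 ** alpha)     # gives C = 1e-2 here

def f_limit(n):
    return C * n ** alpha

# Hypothetical calculated FN-points: (N, cumulative frequency per year).
calculated = [(1, 5.0e-3), (10, 8.0e-4), (100, 2.0e-4), (1000, 2.0e-6)]

for n, f in calculated:
    status = "below criterion" if f <= f_limit(n) else "EXCEEDS criterion"
    print(f"N >= {n:4d}: F = {f:.1e}, limit = {f_limit(n):.1e} -> {status}")
```

With these invented numbers the curve exceeds the limit only around N = 100, illustrating the kind of partial exceedance later shown in Figure 3.5.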

What is wrong with FN-criterion lines?

A numerical example of an FN-criterion is provided in HSE (2001b), combining a slope of -1 with a fixed tolerability point of a yearly frequency of 0.0002 for single accidents killing more than 50 people. The FN-curve is applauded as a helpful tool if there are societal concerns about multiple fatalities occurring in one single event. The technique is also judged useful for comparing man-made accident profiles with natural disaster risks deemed tolerable by society. This claim is contested in HSE (2009) and Skjong et al. (2007). In summary, their objections concern the lacking ability of FN-curves to allow meaningful comparison, telling nothing about the relative exposure and hazard characteristics of different sectors. Nevertheless, the band of researchers agree that FN-criteria are an informative means of complementing individual risk metrics. Evaluating the technique in light of their most important requirement, NORSOK Z-013N (2001) argues that it might confuse rather than aid decision makers if the limit is exceeded in one area but is otherwise below, as illustrated in Figure 3.5. Consulting the contemporary work of HSE (2009), this deficiency is still left unanswered. Because calculating full FN-curves is a resource intensive task requiring in-depth mathematical analysis of all potential major accident scenarios, HSE has recently launched more sophisticated methods for efficiently aggregating societal risk, named quickFN and SRI.

Figure 3.5. Case where the predicted risk exceeds the FN-criterion line in one area, while otherwise below; the F-N diagram shows the risk acceptance criterion line and the calculated risk
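The situation in Figure 3.5, where a calculated FN-curve exceeds the criterion line only over a limited range of N, can be detected by a simple pointwise comparison. The calculated values below are invented for illustration, and fn_criterion is the sketch shown earlier.

    def exceeded_regions(calculated, criterion):
        """Return the N-values where the calculated cumulative frequency F(N)
        exceeds the criterion line; an empty list means the curve is below everywhere."""
        return [n for n, f in calculated.items() if f > criterion(n)]

    # Illustrative calculated FN-curve (cumulative frequency per year of N or more fatalities)
    calculated = {1: 5e-4, 10: 2e-4, 100: 5e-6}
    print(exceeded_regions(calculated, lambda n: fn_criterion(n, alpha=-1.0)))  # -> [10]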

Amongst the most cited antagonists of FN-criterion lines are Evans & Verlander (1997), raising two main objections. First, the criterion is accused of prescribing unreasonable decisions, as a result of concentrating on just one extreme feature of a statistical distribution. Secondly, they condemn the technique as illogical in a decision-theoretical sense, providing inconsistent recommendations if an identical risk picture is presented in different ways. Hereby discarding the use of FN-curves for deciding on acceptable risk, the authors alternatively suggest the method of expected disutility, of which PLL is a simple variant.

3.4.2 Potential loss of life - PLL

NORSOK Z-013N (2001, p.41) defines potential loss of life (PLL) as:

The PLL value is the statistically expected number of fatalities within a specified population during a specified period of time.

By example, the average number of Norwegian persons killed in road accidents during the last five years was 242 (SSB, 2009), providing an estimate of PLL within the same population in 2009. Similarly, Table 3.1 illustrates PLL-estimations of selected Norwegian industries.

Conceptual links between PLL and IRPA, LIRA and FN-curves

PLL can be computed by summing the products of all fN-pairs in a non-cumulative fN-curve (Pasman & Vrijling, 2003):

PLL = Σ f · N    (3.4)

Idealistically assuming that all n people in a specified population are exposed to the same individual risk, there is also a link between the values of PLL and IRPA of a certain activity (Rausand & Utne, 2009):

PLL = Σ n · IRPA_n    (3.5)

Combining population density with iso-risk contours, PLL provides valuable information above that of single LIRA-based metrics. After all, what is the point in reducing local risk if no people are ever present?
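A minimal Python sketch of Eqs. (3.4) and (3.5); the fN-pairs and group sizes are made up purely for illustration.

    # PLL from a non-cumulative fN-curve: sum of f*N over all pairs (Eq. 3.4)
    fn_pairs = [(1e-3, 1), (1e-4, 10), (1e-6, 100)]   # (frequency per year, fatalities), illustrative
    pll_from_fn = sum(f * n for f, n in fn_pairs)

    # PLL from group sizes and individual risk (Eq. 3.5),
    # assuming a uniform IRPA within each group
    groups = [(50, 1e-4), (200, 1e-5)]                # (number of persons, IRPA), illustrative
    pll_from_irpa = sum(n * irpa for n, irpa in groups)

    print(pll_from_fn, pll_from_irpa)                 # -> 0.0021 0.007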

Industry                        Number of fatalities
Agriculture                     9.4
Transport and communication     7.2
Construction                    6.4
Health and social services      1.6
Extraction                      1.4

Table 3.1. Estimated PLL of selected industries in Norway, based on the average number of fatalities in 2004-2008 (Source: Arbeidstilsynet (2009))


PLL is a summary measure

PLL is sometimes referred to as the expectation value (EV), as it is a risk integral representing the expected number of fatalities in one overall number (HSE, 2009). Evans & Verlander (1997) label this a measure of disutility, capturing that the expected value increases with the level of harm. Based on the principle of minimizing disutility, such metrics are claimed to provide more consistent decisions than FN-criterion lines.

Multiplied with the value of a statistical life, PLL is commonly utilized in cost-benefit analyses. Since PLL allows risk to be expressed on a uniform basis, it is particularly suited for deciding on risk reduction measures in ALARP demonstrations (Marszal, 2001). However, it is not common practice to set overall acceptance limits by PLL, as reported in the study of Vinnem (2007) on Norwegian offshore practices. A plausible explanation is that PLL does not take exposure into account, neither in terms of number of people nor hours exposed. This complicates comparison between activities, biasing decisions towards scarcely manned concepts or activities. Being a summary measure, PLL loses important information about risk to individuals or a small group of people (HSE, 2009). Furthermore, the metric does not differentiate between multiple accidents killing few people and few accidents killing a multitude of people. PLL is thus incapable of expressing societal risk in the sense of Ball & Floyd (1998).

Both the strengths and weaknesses of PLL stem from the simplified calculation of an absolute number of fatalities. On one hand, the metric is suitable for decision contexts and tools requiring an overall risk number, in addition to the advantage of being easy to grasp for non-experts. On the other hand, PLL is not suited for averaging differences between different groups of people, or for comparing activities with differing manning levels or exposure peaks (NORSOK Z-013N, 2001).

3.4.3 Fatal accident rate - FAR

Adopting the definition of Rausand & Utne (2009, p.58), the fatal accident rate (FAR) is:

The statistically expected number of fatalities resulting from accidents per 10^8 exposed hours

There are different variants of FAR in the offshore industry (NORSOK Z-013N, 2001):

- Group-FAR, expressing risk to a group with uniform risk exposure
- Area-FAR, mapping risk in a physically bounded area
- Overall FAR, averaged over all positions on a specific installation


Since the main difference concerns how the averaging of risk is performed, the variants are not discussed separately. It is, however, crucial to be aware of the applied averaging in pragmatic evaluations of FAR.

Accounting for exposure

As FAR is expressed per time unit, one cannot add the contributions to FAR from different activities unless exposures are assumed equal or weighted relative to each other (Vinnem, 2007). The element of exposed hours shall also suit the system under consideration. By example, FAR is tailored to the civil aviation industry by specifying the number of fatalities per 100 000 hours of flight (Rausand & Utne, 2009).

Typical FAR-values lie in the range of 1-30, making FAR fairly easy to grasp for non-experts compared to risk metrics of very low probabilities (Rausand & Utne, 2009). Vinnem (2007) reports offshore criteria of FAR = 20 for the most exposed groups, and FAR = 10 for the total installation work force. Requiring a FAR-value of less than 10 basically means that no more than ten fatalities are acceptable during the lifework of approximately 1400 persons (Rausand & Utne, 2009). If the exposure time t_i for each person is known, FAR can be derived from PLL:

FAR = (PLL / Σ t_i) · 10^8    (3.6)
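A minimal sketch of Eq. (3.6) in Python; the manning level, exposure hours and PLL below are assumed numbers for illustration, not values from the cited references.

    def far(pll, total_exposure_hours):
        """Fatal accident rate: statistically expected number of fatalities
        per 1e8 exposed hours (Eq. 3.6)."""
        return pll / total_exposure_hours * 1e8

    # Illustrative: 200 persons exposed 4000 hours each per year,
    # with an estimated PLL of 0.02 fatalities per year.
    print(far(pll=0.02, total_exposure_hours=200 * 4000))   # -> 2.5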

Meaningful comparison

In contrast to PLL, FAR enables meaningful comparison over different solutions by taking exposure into account. Indeed, NORSOK Z-013N (2001) states that FAR is the most convenient of all metrics in this matter. Due to their situation-specific focus and limited averaging, group- and area-FAR are suited for decisions on risk reduction. As such, confined FAR metrics may describe both the totality of risk and distributional issues. However, being conceptually linked to PLL and IRPA, FAR does not distinguish between small- and large-scale accidents. Puritan followers of HSE (1992) might thus accuse the metric of expressing upscaled individual risk rather than societal risk. This problem was early recognized by Holden (1984), claiming that like most statistic-based metrics, FAR essentially expresses average individual risk. For this reason, FAR should not be used in isolation when multiple-fatality accidents are possible.

3.5 Other

The focus has hitherto been on metrics considering fatalities as the consequential endpoint. There are several other expressions of risk, of which the reader should at least have elementary knowledge.


3.5.1 Risk matrix

A risk matrix is a graphical presentation of risk, representing possible combinations of consequence and frequency categories, as shown in Figure 3.6. The categories can be either quantitatively or qualitatively expressed, and may include consequences to personnel, the environment and/or assets (Pasman & Vrijling, 2003). Due to the analyst's freedom in choosing consequence categories, risk matrices can capture both individual and societal risk. A combination may even be possible, expressing the most serious consequences by multiple fatalities and the lower end of the scale by IRPA.

Figure 3.6. Risk matrix (adapted from Rausand & Utne (2009)); frequency categories from very infrequent to very frequent, consequence categories from small to catastrophic, with zones marked acceptable, reduce risks as low as reasonably practicable, and unacceptable

Acceptability indications in risk matrices

Although risk matrices, like FN-curves, represent pairs of frequency and consequence, there is an important distinction in that risk matrices express probability distributions, not cumulative frequencies (Skjong et al., 2007). The cells in a risk matrix tell nothing about the chance of having a certain number of fatalities or more. Instead, the severity of risk posed by different combinations of frequency and consequence is expressed. Different risk levels are usually indicated by three colored zones, from the most severe, red-colored cases in the upper left corner, to the least serious marked with green in the lower right part. Although Skjong et al. (2007) see matrix acceptability indications as a hindrance to holistic considerations of risk, the zones are commonly used to evaluate hazardous events. In Figure 3.6, the upper and lower regions represent unacceptable and acceptable risk, whilst an intermediate zone demands evaluation of further risk reduction (NORSOK Z-013N, 2001). Such an approach is often seen with explicit or implicit reference to the ALARP-principle, which is examined in chapter 4.4.

Conceptual strengths and deficiencies

Risk matrices enable relative ranking of risks, for prioritizing risk reduction measures or examining the need for detailed analyses (Woodruff, 2005). To prioritize between events, a quantitative measure can be assigned in the form of a risk priority number (RPN), expressing the seriousness of each cell (Rausand & Utne, 2009).
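As a sketch of how a matrix cell can be turned into a risk priority number and an acceptability zone, the following Python snippet uses the category names of Figure 3.6; the RPN scheme and the zone boundaries are illustrative assumptions, since real matrices define these per study.

    FREQ = ["very infrequent", "infrequent", "fairly frequent", "frequent", "very frequent"]
    CONS = ["small", "medium", "large", "very large", "catastrophic"]

    def rpn(freq_idx, cons_idx):
        """Illustrative risk priority number: product of 1-based category indices."""
        return (freq_idx + 1) * (cons_idx + 1)

    def zone(freq_idx, cons_idx):
        """Illustrative three-zone classification of a matrix cell."""
        score = rpn(freq_idx, cons_idx)
        if score >= 15:
            return "unacceptable"
        if score >= 6:
            return "reduce risk ALARP"
        return "acceptable"

    # 'frequent' (index 3) combined with 'large' (index 2) gives RPN 12
    print(zone(FREQ.index("frequent"), CONS.index("large")))   # -> reduce risk ALARP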

As risk matrices allow risk to be qualitatively expressed, they provide a unique tool when full quantitative analyses are impractical (Pasman & Vrijling, 2003). Owing to this, the approach is adequate for formulating acceptance criteria for temporary phases (Vinnem, 2007). The categories are then broadly defined, like 'small' (consequences) and 'frequent' (occurrences). NORSOK Z-013N (2001) remarks that the coarseness of each category determines the level of precision and whether risk reduction is reflected. Broad categorization yields few uncertainties, as one most certainly will end up in the 'right' cell. Simple risk matrices are also easy to communicate to non-experts.

Perhaps the greatest limitation of risk matrices is that the totality of risk is concealed when a risk picture is split into many contributions (Rausand & Utne, 2009). Even if each hazardous event poses an insignificant, green-colored risk, the risk from the totality of scenarios may be painted red. Therefore, an evaluation of the overall risk picture should always follow single-scenario assessments. A final concern is that risk matrices often are tailored to a specific study, using relative consequence patterns and worst- or best-case assumptions. This calls for careful consideration of the corresponding frequency classes, and awareness of risk matrices' limited suitability for comparison across activities (NORSOK Z-013N, 2001).

3.5.2 Loss of main safety functions

PSA (2001) requires acceptance criteria to be set for the loss of main safety functions. Consulting NORSOK Z-013N (2001), this refers to the frequency of accidental events leading to impairment of main safety functions, e.g. escape ways and control room functions. Ensuring that the platform design does not imply undue levels of risk, loss of main safety functions is a design-related criterion suited for decision making on technical measures.

Vinnem (2007) interprets loss of main safety functions as an indirect expression of risk to personnel. According to NORSOK Z-013N (2001), such metrics advantageously provide less uncertain risk estimates than direct expressions of personnel risk, since the endpoint of calculation lies earlier in the event sequence. Loss of main safety functions is unsuitable for comparing risk from other systems, as it is developed for offshore application exclusively.


3.5.3 Safety integrity level - SIL

Safety integrity level (SIL) describes the amount of risk reduction an electrical/electronic/programmable electronic system provides (Marszal, 2001). A SIL-criterion, i.e. the required risk reduction, is in Figure 3.7 conceptualized as the difference between the risk prior to a safety instrumented system (SIS) and the tolerable level of risk. IEC 61508 (1998, p.31) defines safety integrity as:

The probability of a safety-related system/SIS satisfactorily performing the required functions under all stated conditions within a specified period of time

Safety integrity is split into four discrete levels, from SIL 4 to SIL 1. The levels are distinguished by maximum tolerable failure frequency and the range of risk reduction required. Each SIL is quantitatively expressed by a probability of failure on demand (PFD) and a risk reduction factor, derived from 1/PFD. To claim achievement of a specific SIL, qualitative requirements must also be adhered to (IEC 61508, 1998).
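The following Python sketch shows how a required SIL can be read from a required risk reduction factor. The PFD bands are the commonly cited low-demand bands of IEC 61508; the frequencies in the example are invented, and an actual SIL claim additionally requires the qualitative criteria mentioned above.

    # Commonly cited IEC 61508 low-demand bands: average PFD interval per SIL
    SIL_BANDS = {4: (1e-5, 1e-4), 3: (1e-4, 1e-3), 2: (1e-3, 1e-2), 1: (1e-2, 1e-1)}

    def required_sil(unmitigated_frequency, tolerable_frequency):
        """Lowest SIL whose minimum risk reduction factor (1 / upper PFD bound)
        covers the required risk reduction; None if SIL 4 is insufficient."""
        rrf_needed = unmitigated_frequency / tolerable_frequency
        for sil in (1, 2, 3, 4):
            if 1.0 / SIL_BANDS[sil][1] >= rrf_needed:
                return sil
        return None

    # Illustrative: unmitigated 1e-2 per year against a tolerable 1e-5 per year -> RRF 1000
    print(required_sil(1e-2, 1e-5))   # -> 3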

Figure 3.7. Required risk reduction in terms of SIL (adapted from Marszal (2001)); the required risk (frequency) reduction from the process risk of each case down to the acceptable risk level determines the SIL, from SIL 1 to SIL 4

A SIL is, like loss of main safety functions, a technical criterion that is suited for decisions on technical measures related to safety instrumented systems. According to IEC 61508 (1998), SILs are functional lower-level requirements that shall comply with overall risk acceptance criteria. For this purpose, a layer of protection analysis (LOPA) is useful: a semi-quantitative method for determining SIS performance requirements and evaluating the adequacy of protection layers (BP, 2006).


3.5.4 Injury and ill health

In some circumstances, metrics related to injury or ill health offer the most proper description of risk to persons. This applies if the frequency of accidents resulting in injuries surpasses that of fatalities (Skjong et al., 2007), or for accidents whose effects are non-fatal or delayed, like toxic or radioactive releases (HSE, 1992). Moreover, injury measures advantageously reflect variations in susceptibility (NSW, 2008). Whether to use injury or fatality metrics is not only a pragmatic, but also a moral problem. It is not given that a fatality represents a more severe consequence than a permanent disabling injury. One can imagine the position of a severely hurt person experiencing considerable loss of life quality, wishing she'd been struck just a little harder by the accident. Even when taking a societal view, the economical costs of a permanently disabled person might be of similar proportions to those of a lost life (Skjong et al., 2007).

Criteria for injuries and ill health may be expressed as acceptable levels of surrogate endpoints causing injury or death, like heat radiation (kW/m²) or received concentration of a toxic chemical (NSW, 2008). The ultimate effect depends on the duration and mode of exposure, as well as the nature of the toxic material. A different approach is suggested in EMS (2001), attaching relative weight factors of 0.01 and 0.1 to minor and major injuries respectively. Within this method, it is possible to combine injuries and fatalities into a single risk criterion.
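A small sketch of the weighting idea described above, combining fatalities and injuries into a single number with the relative weights of 0.1 and 0.01; the annual figures in the example are illustrative.

    def fatality_equivalents(fatalities, major_injuries, minor_injuries,
                             w_major=0.1, w_minor=0.01):
        """Combine expected fatalities and weighted injuries into one risk number
        (weights per the EMS (2001) approach described above)."""
        return fatalities + w_major * major_injuries + w_minor * minor_injuries

    # Illustrative expected annual numbers
    print(fatality_equivalents(fatalities=0.1, major_injuries=2.0, minor_injuries=30.0))  # -> 0.6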

Alternatively, health criteria can be expressed by quality-adjusted life years (QUALY). QUALY is obtained by multiplying expected life-years with a weight factor reflecting quality of life, ranging from 0, being equal to dead, to 1, reflecting full health. QUALY uniquely reflects the number of life years lost beyond the binary question of survival or death (Johannesson et al., 1996). However, using QUALY as a risk acceptance criterion appears difficult. Rather, it is utilized in cost-utility analyses in terms of net cost per QUALY gained, deriving conditional levels of acceptable risk (Lind, 2002a).
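A minimal sketch of the cost-per-QUALY-gained figure used in cost-utility analysis; the cost, time horizon and quality weights are illustrative assumptions.

    def qualys(life_years, quality_weight):
        """QUALYs: expected life-years multiplied by a quality-of-life weight in [0, 1]."""
        return life_years * quality_weight

    def cost_per_qualy_gained(cost, qualys_with, qualys_without):
        """Net cost per QUALY gained by a measure."""
        return cost / (qualys_with - qualys_without)

    # Illustrative: a 2 MNOK measure raises quality of life from 0.6 to 0.8 over 10 years
    print(cost_per_qualy_gained(2e6, qualys(10, 0.8), qualys(10, 0.6)))   # -> 1e6 NOK per QUALY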


4 Deriving risk criteria

4.1 Introduction

Formulating risk acceptance criteria is not a straightforward task. Not only can the creator choose between a variety of risk metrics, there is also a spectrum of principles and methods for deciding on the specific risk level. According to Nordland (1999), each approach attempts to rationally determine objective levels of acceptable risk. However, since it is difficult (if not impossible) to calculate acceptable risk levels objectively, different societies have developed distinct approaches. The establishment of risk acceptance criteria is therefore strongly determined by historical, legal and political contexts (Hartford, 2009; Ale, 2005).

In this chapter, the basis and applicability of various approaches to setting risk acceptance criteria are discussed. A distinction is drawn between fundamental principles, deductive methods and specific approaches. Fundamental principles represent ethical lines of reasoning, while deductive methods describe how criteria are derived. Specific approaches cover the applied reasoning in different regimes, based on combinations of fundamental principles and deductive methods.

4.2 Fundamental principles

Utility, equity and technology are the three 'pure' criteria for judging risk acceptability (HSE, 2001b). These are of principal, i.e. ethical, nature, to be used alone or combined as building blocks in the creation of practical approaches. Yet, HSE (2001b) admits that on their own, a universally accepted application is still lacking. Since each offers only a single line of reasoning in a complex world of risks, their practical and ethical implications are unavoidably contested. This is especially the case with utility- and equity-based criteria, offering contrary views on distributional issues. The different principles are illustrated in Figure 4.1.

Figure 4.1. Three principal lines of reasoning: equity (all risks must be kept below an upper limit), utility (risk acceptability is the balancing of costs and benefits) and technology (risk must be as low as that of a reference system)

4.2.1 Utility

Utilitarian ethics, philosophically rooted in the thoughts of Bentham and Mill, is based on the presumption that one shall maximize the good and minimize what is bad for society as a whole (Hovden, 1998). When deciding on the introduction of a new technology, this implies the search for an optimum balance between its totality of benefits and negative consequences or costs. In the allocation of risk reduction expenditures, a balance between the costs and benefits related to a certain measure is sought (HSE, 2001b).

A central utilitarian assumption is that one shall look at the overall balance for society, rather than the balances experienced by individuals. Utility ethics therefore provides a powerful line of argument in legitimizing technological risk to society. The consequence is that some of its members might suffer on behalf of the society as a whole, as protested by Fischhoff (1994). Also HSE (2001b) recognizes this inherent deficiency, warning that a strict application of utilitarian thinking imposes no upper bounds on acceptable risk, as only those risks deemed cost-effective to reduce are reduced. This pinpoints both the weakness and merits of utility-based criteria. Unconditional levels of acceptable risk cannot be set (allowing unfair distribution of risk), since one always has to consider the totality of goods and bads of a proposal (ensuring that the good is maximized).

4.2.2 Equity

The ethics of equity has its origin in the moral reasoning of Rawls, stating that one shall prefer this society even if unaware of one's own position in it. Maximizing the minimum, priority is given to the least advantaged (Hovden, 1998). The premise of equity-based criteria is that all individuals have unconditional rights to a certain level of protection. Conversely, this yields a level of risk that is acceptable for none of the members of society, encouraging standards and fixation of upper-limit tolerability criteria (HSE, 2001b). Owing to this, the use of absolute risk acceptance criteria has its origin in the ethics of equity.

The claim that each member has an equal right not to experience high risk stands in great contrast to the utilitarian principle. Amongst those favoring an equity-based reasoning is Fischhoff (1994), stressing that a technology must provide acceptable consequences for everyone affected by it. Fischhoff proposes that a risk should be considered acceptable only if its benefits outweigh the risks for every member of society. One can question whether this is a pure equity-based reasoning, or if maximizing individual benefits comprises utilitarian elements on a personal level. However, its ethical core is still equity, as one looks at the distribution of individual risks and benefits rather than the overall balance.

Although the ethics of equity is intuitively appealing, it leads to ineffective application of technology and risk reduction measures if carried out to its extreme (Hovden, 1998). Equity-based criteria also promote considerations of unrealistic worst-case scenarios, distorting decisions through systematic overestimations of risk.

4.2.3 Technology

The principle of technology assumes that an acceptable level of risk is attained by using state-of-the-art technology and control measures (HSE, 2001b). Risk acceptance criteria are set by comparison with systems following good practice. An example is the notion of 'adequate safety' in the EU machinery directives (EU, 2006), requiring new machines to be at least as safe as comparable devices already on the market. However, what constitutes a comparable technology is disputed, as is further discussed in section 4.4.3. Another manifestation is the concept of 'minimum SIL' employed in the Norwegian offshore oil and gas sector, assisting in the establishment of SIL-requirements based on well-proven design solutions (OLF 070, 2004).


Unlike the principles of equity and utility, technology-based criteria do not reflect any explicit ethical tenet. Rather, the principle implicitly assumes that a technology is ethically justified if present levels of risk are preserved. This represents both an ethical and a technological shortcoming, as one is not necessarily provided with an understanding of how things ought to be (Breugel, 1998). Little incentive is provided for developing more efficient solutions, as early advocated by Starr (1969). The principle is further criticized for ignoring the balance between costs and benefits (HSE, 2001b), possibly favoring expensive, safe technology over slightly less safe, but inexpensive developments that people or organizations can actually afford.

The technology principle strongly resembles the method of bootstrapping, presented in section 4.3.2. A delicate distinction is that while pure technology approaches look at de facto risk for comparable systems, the latter goes further in assuming that previously accepted risk levels apply across systems and risks of the future.

4.2.4 An alternative principle

Although not presented in HSE (2001b), Habermas' ethics of discourse should finally be mentioned. The principle stands out from the ones previously discussed by not claiming righteous truths on acceptable levels or balances of risk. What matters are not the criteria themselves, but the democratic process of formulation (Hovden, 1998). As long as criteria are determined through open transactions and consensus between those affected by the risk, scientific rationality is achieved by means of agreed formalisms (Linnerooth-Bayer, 1993).

Since the rightness of criteria is sought from democratic processes rather than analytical approaches, the ethics of discourse is not further examined. However, it does offer a valuable perspective, contemplating that criteria may be judged unsuitable regardless of the analytical approach.

4.3 Deductive methods

In the pioneering work of Fischhoff et al. (1981), three methods for solving acceptable-risk problems are examined. The quartet of social scientists is concerned with the meta-decision problem of how to decide how to decide, claiming that a lack of consensus on decision-making methods has fostered poorly articulated rationales and idiosyncratic applications to risk acceptability. Whether this is still the case now that hundreds of citations have passed is a conclusion too great to be drawn at this point. Since the methods Fischhoff and his coworkers systematically evaluated are still in extensive use, valuable advice is offered for the sound derivation of risk acceptance criteria. Their discussion of expert evaluation, bootstrapping and formal analysis is represented in the following, noting their requirements that acceptable-risk methods should be:


- Comprehensive
- Logically sound
- Practical
- Open to evaluation
- Politically acceptable
- Compatible with institutions
- Conducive to learning

4.3.1 Expert judgment

Letting the best available experts decide on what risk is acceptable, personal experience can be integrated with professional practice and the desires of clients or the society as a whole. Although experts are involved in most decisions on acceptable risk, what characterizes this method is judgment; professionals are not bound by the conclusions of analysis, nor do they need to articulate their rationale. Therefore, only the outcome of decisions is open to evaluation. A typical situation of expert judgment is medical treatment, in which the doctor is trusted to take decisions on behalf of the patient. Another example is the setting of reliability standards for single components in a complex system. Both situations represent routine decision making of relatively limited scope, for which expert judgment has proven practical and cost-effective.

The method fails in comprehensive, irregular decisions, like whether to go ahead with a new technology. This is partly due to professionals' often lacking ability to grasp the whole problem, and partly because complex situations urge political discussions. When controversial decisions are taken by professionals, history has shown that they often serve as scapegoats, accused of overemphasizing technical issues at the expense of public concerns.

4.3.2 Bootstrapping

Bootstrapping means using the levels of risk tolerated in the past as a basis for evaluating future risks. There are two strong assumptions underlying this approach: that a successful balance of risks and benefits was achieved in the past, and that the future should work in the same way. The former is empirical, while the latter is of political character. Fischhoff et al. distinguish between four bootstrapping approaches:

- Risk compendiums compare different situations posing the same level of risk. By example, Table 4.1 represents a selection of daily activities estimated to increase IRPA in any year by 10^-6.


- Revealed preferences are reflected in the market behavior of the public, assuming that society has already reached an optimum balance between the risks and benefits of any existing technology. A new technology is acceptable if the associated risk does not exceed that of existing technologies whose benefits are the same to society (Starr, 1969). In contrast to technology-based criteria, benefits are explicitly considered. However, there are no considerations of how these are distributed, since market behavior does not reflect the cost-benefit trade-offs of individuals.

- Implied preferences are read from legal records. Identifying implicit risk-benefit trade-offs for existing hazards, acceptability standards are set for new technologies. The central assumption is that law and regulatory actions represent society's best compromise between the public's needs and the current economic and political constraints. The method is criticized for lacking coherence and reinforcing bad practices, as laws may be context-dependent, poorly written and hastily conceived.

- Natural standards means deriving tolerable risk limits from the exposure levels of preindustrial times. Natural exposure is typically found through geological or archaeological studies. Unlike the other bootstrapping methods, natural standards are independent of a particular society and are therefore suited for global environmental risk problems.

Common for all but the latter is a strong bias towards status quo. Using past or present risk as reference level does not encourage future improvements. Historical records only provide indications of accepted, i.e. implemented, technologies, telling nothing about whether the associated risks were judged acceptable by the public. Another deficiency is that acceptability judgments are taken without explicitly considering alternative solutions. Even if so, no guidance is provided if both fail or pass the comparison. All methods fail in considering cumulative risks from isolated decisions.

The advantage of bootstrapping methods is their breadth. A broad spectrum of hazards is considered, attempting to impose consistent safety standards throughout society. The element of comparison also provides a risk number that is simple to grasp and easy to deduce. On the contrary, the weakness of bootstrapping methods is their lack of depth. Decision problems are improperly defined, decision rules imprecise and the outcomes unclear and poorly justified.

4.3.3 Formal analysis

Formal analyses provide explicit recommendations on the trade-offs between risks and benefits of acceptable-risk problems. They are intellectual technologies for evaluating risk, based on the premise that facts and values can be effectively and coherently organized (Fischhoff et al., 1981). Complex problems are decomposed into simpler ones, offering a powerful tool for regulatory and administrative institutions dealing with difficult risk issues (Vrijling et al., 2004).


Activity                                              Cause of death
Smoking 1.4 cigarettes                                Cancer, heart disease
Drinking 0.5 liter of wine                            Cirrhosis of the liver
Living 2 days in New York or Boston                   Air pollution
Traveling 6 minutes by canoe                          Accident
Traveling 150 miles by car                            Accident
Flying 1000 miles by jet                              Accident
Eating 40 tablespoons of peanut butter                Liver cancer
One chest x-ray taken in a good hospital              Cancer caused by radiation
Eating 100 charcoal-broiled steaks                    Cancer from benzopyrene
Living 150 years within 20 miles of a nuclear plant   Cancer caused by radiation

Table 4.1. Risk compendium of activities estimated to increase the chance of death in any year by 10^-6 (Source: Wilson (1979), represented in Fischhoff et al. (1981))

Owing to this, formal analysis is superior to bootstrapping and professional judgment in evaluating new hazardous technologies.

Fischhoff and his colleagues emphasize that formal analyses can be utilized as either methods or aids. If interpreted as a method, it is given that anyone who accepts its use and underlying assumptions shall follow the recommendations. Alternatively, the recommendations can be seen as clarifying aids, addressing issues of facts, values and uncertainties.

Even simple formal analyses require highly trained experts, transferring acceptable-risk decisions to a technical elite. Owing to this, their success is strongly dependent on good communication with clients and the public. The great advantages of formal analyses are their openness and soundness, providing logical recommendations that are open to evaluation. Their conceptual framework helps identify and sharpen the debate around risk issues, possibly encompassing a broad range of concerns. However, as full-blown methods are expensive and time consuming, only the most dominant concerns are included. And because what constitutes the most important concerns is ultimately a judgmental question, the separation of facts and values is critical (yet Utopian) in formal analysis. Two main types of formal analysis are presented in Fischhoff et al.: cost-benefit analysis (CBA) and decision analysis¹. CBA has, according to Fischhoff et al., gained broader acceptance than decision analysis, due to the claim of objective value measurement. Paradoxically, the mixing of facts and values is especially complex in CBA, as they are implicit.

¹ Neither of these was developed for acceptable-risk problems. Both assume e.g. a well-informed, single decision maker or entity and immediacy of consequences. This is rarely the case in decisions regarding complex risk issues.


Cost-benefit analysis

Cost-benefit analysis provides a quantitative evaluation of the costs and benefits in a decision problem, expressed in a common monetary unit. Restricting itself to consequences amenable to economic evaluation, recommendations are produced in pursuit of economic efficiency. Grounded in economic theory, the alternative that best fulfills the criterion of utility is recommended. Since the utilitarian principle ignores distributional issues, Fischhoff et al. report conceptual disagreement on whether equity considerations may be included in the analyses.

There is no single CBA methodology. As pointed out by French et al. (2005), there is rather a family of methods sharing the same philosophical premise of balancing net expected benefits and costs. This balancing is an essential feature of the ALARP-approach of the following section, explicitly integrating CBA in a broader risk acceptability framework. What seems agreed upon is that cost-benefit optimization provides a necessary aid for evaluating risk reduction investments and judging the acceptability of new technological projects.

Requiring all costs and benefits to be monetarily expressed, CBA runs into ethical and practical difficulties in assigning the cost of losing a human life and the benefit of saving one. An area where CBA and the value of preventing a fatality (VPF) are explicitly used is the Swedish and Norwegian road transport authorities. In a recent study by Elvik et al. (2009), the prevention of one road fatality is valued at approximately 20 million NOK. According to Vatn (1998), there is no universal agreement on how to value lives. The problem can be seen from the perspective of the individual as well as the decision maker. Hammit (2000) reviews the theoretical foundation and empirical methods for estimating the value of a statistical life (VSL), expressing the valuation of changes in mortality risk across a population. VSL represents what people on average are willing to pay for an infinitesimal mortality risk reduction. This is not to be interpreted as the amount an individual is willing to pay for avoiding certain death to himself or an identified individual, as most people are willing to provide unlimited resources in such a situation (Hovden, 1998). The prefix 'statistical' is thus essential, avoiding unrealistically high values attached to the loss of human life (Vrijling et al., 1998). The VSL for each individual depends on age, wealth, baseline mortality risk and whether consequences are acute or delayed. In a report prepared by the Australian Safety and Compensation Council (2008), over 200 literature studies on VSL are reviewed, revealing great differences between various estimations, as shown in Figure 4.2. The estimated VSL differs between countries and across sectors, ranging from mean values of 11 to 51 million NOK in health and occupational safety respectively. Although the theories of VSL are well-established, Hammit (2000) closes his review by calling for conceptual and methodological research on how to account for risk characteristics other than probability and how to value risk across different populations. For an overview of different expressions and quantities of the value of a human life, the reader may consult Skjong et al. (2007).

Figure 4.2. Mean values of VSL-estimates by country in million NOK (2006), covering Australia, Canada, South Korea, Japan, France, Denmark, Sweden, the UK and the US (Source: Australian Safety and Compensation Council (2008))

While determining the value of a human life is in itself a controversial task, taking into account that both costs and benefits come in time series adds more complexity to the issue. Fischhoff et al. (1981) report that CBA is hampered by the absence of consensus on which rate of discounting, i.e. degree of depreciation, to assign future costs and benefits. Lind (2002b) acknowledges that economic discounting is met by repugnance when the value of future lives and generations is under consideration. In response, Lind proposes a risk acceptability function demanding discounting of financial quantities only, avoiding both what he describes as the questionable concept of the value of a human life, and the discounting of it.
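To illustrate what discounting does to safety benefits that arrive as a time series, the sketch below computes the present value of a constant annual risk-reduction benefit. The VPF of 20 MNOK echoes the Elvik et al. (2009) figure cited above, while the risk reduction, time horizon and discount rate are assumptions for illustration.

    def present_value(annual_amount, rate, years):
        """Present value of a constant annual amount discounted at `rate` over `years`."""
        return sum(annual_amount / (1 + rate) ** t for t in range(1, years + 1))

    # Illustrative: a measure reduces PLL by 0.01 fatalities/year over 30 years,
    # monetized with a VPF of 20 MNOK (cf. Elvik et al., 2009)
    annual_benefit = 0.01 * 20e6
    print(present_value(annual_benefit, rate=0.04, years=30))   # ~3.46 MNOK
    print(present_value(annual_benefit, rate=0.00, years=30))   # 6.0 MNOK undiscounted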

Cost-benefit analysis is further criticized by Fischhoff et al. (1981) for wrongfully claiming value-immunity. Value preferences lie implicit in the political choice of focusing on economical consequences, as well as in easily manipulated market data. Refined cost-benefit functions dealing more explicitly with value concerns have been proposed for the setting of risk acceptance criteria, for instance by Rackwitz (2004) and Nordland (1999). These integrate compound indicators of life quality concerns and public risk aversion respectively.


Decision analysis

Decision analysis is based on axiomatic decision theory for making choices under uncertainty, providing prescriptive recommendations given that its axioms are accepted. The reader may consult e.g. Abrahamsen & Aven (2008) for a theoretical examination of the specific axioms. At the core of decision analysis are utilities, meaning subjective value judgments assigned to the various attributes of a decision problem. By subjectively weighing the importance of each attribute, consequences are evaluated relative to each other. The alternative providing the greatest utility over all consequences is recommended. There are several variants of decision analysis, having subjective utility functions as the common denominator. One of these is multi-attribute utility theory, praised by French et al. (2005) in the following presentation on the ALARP-principle.
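A minimal additive multi-attribute utility sketch in Python; the attributes, weights and single-attribute scores are invented for illustration, whereas a real application would elicit them from the decision maker and stakeholders.

    # Additive multi-attribute utility: weighted sum of single-attribute utilities in [0, 1]
    WEIGHTS = {"safety": 0.5, "cost": 0.3, "environment": 0.2}   # assumed elicited weights, sum to 1

    ALTERNATIVES = {
        "design A": {"safety": 0.9, "cost": 0.4, "environment": 0.7},
        "design B": {"safety": 0.6, "cost": 0.9, "environment": 0.8},
    }

    def utility(scores):
        return sum(WEIGHTS[a] * scores[a] for a in WEIGHTS)

    best = max(ALTERNATIVES, key=lambda name: utility(ALTERNATIVES[name]))
    print({name: round(utility(s), 2) for name, s in ALTERNATIVES.items()}, "->", best)
    # -> {'design A': 0.71, 'design B': 0.73} -> design B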

In decision analysis, the values, the choice of which consequences to consider, and the probabilities are all subjective. As relative frequency data are not required, decision analysis is suitable for considering unique as well as frequent events. Although a frequentistic interpretation is not required in CBA, Fischhoff et al. (1981) report its prominence amongst cost-benefit analysts. In further contrast to CBA, decision analysis enables consideration of non-economic consequences. Hence, it has the advantage of considering whatever fact- or value-issues are of interest to the decision maker. Not claiming an objective ground, the inclusion of attitudes towards risk is also naturally accommodated in the analysis.

The quality of recommendations relies on the quality of value judgments. Since values are often badly articulated, unconscious or acted upon in contradictory ways, utility weights can be erroneously assigned. This calls for non-manipulative methods for value elicitation and a conscious approach to risk framing. Additional difficulties arise if multiple parties involved in societal decision making do not agree on the relative attractiveness of alternatives. Agreement is still easier sought than for methods claiming value-immunity, since judgments are explicit and out in the open (French et al., 2005). A difficult question is whether company or regulatory decision makers are entitled to make value judgments on behalf of the public. Somehow this must be resolved, as aggregating public preferences is an inescapable methodological difficulty according to Fischhoff et al. (1981). Assuming that the owner and the public have a common interest in the success of a project, Ditlevsen (2003) is convinced that representative decision analysis provides an upper risk acceptance limit agreeable to both parties.


4.4 Specific approaches

4.4.1 ALARP

The ALARP-principle of 'as low as reasonably practicable' is the British risk acceptability framework. Although widely recognized in Norway and other countries, the principle is by far most institutionalized in the UK under HSE (Vinnem et al., 2006). HSE has prepared a series of guidance documents on the principle, with the so-called 'ALARP trilogy' of HSE (2001a), HSE (2003a) and HSE (2003c) providing high-level guidance on the generic framework outlined in HSE (2001b). Sector-specific advice is found in e.g. HSE (2004) for control of major accident hazards (COMAH).

A principal illustration of ALARP is given in Figure 4.3. This conceptualization is named the 'TOR framework', introduced in the report on the tolerability of risk from nuclear power stations in HSE (1992)². The framework is subsequently adapted for general applications in HSE (2001b). The expanding breadth of the triangle represents increasing levels of risk. At the top we find the darkest region of unacceptable risks, whose magnitude demands reduction regardless of the benefits of a proposal. Only in exceptional cases, like war, may risks of this region be retained. In contrast are the broadly acceptable risks at the bottom of the triangle, generally regarded as insignificant or adequately controlled. Between these outer zones is a region holding tolerable risks people are willing to tolerate for securing some benefits. Unlike the upper and lower zones, risks in this mid region cannot be claimed tolerable just because they happen to fall within the limits. The crucial point is that a risk must have been reduced to a level as low as reasonably practicable to merit this designation. What constitutes reasonable practicability is given by the ratio between the costs and benefits of reducing a specific risk. This necessitates evaluations taken on a case-by-case basis. Technology licensing in the UK is therefore denoted 'safety case regulations', reflecting the country's common law tradition of regarding what is not explicitly allowed as forbidden (Ale, 2005). In this legislative system, it is the responsibility of the operator to ensure that a risk is tolerable according to ALARP. In some cases, ALARP can be demonstrated with rapid judgment, whilst formal analysis is required for situations of major or complex risk.

The setting of acceptability limits in the TOR-framework is guided by all three pure criteria of section 4.2 (HSE, 2001b). While the lower regions follow a utilitarian rationale, the upper limit is set out of equity concerns. Additionally, technology-based criteria ease the criteria setting in all three regions.

² Sometimes the abbreviation SFAIR ('safe so far as is reasonably practicable') is used instead of ALARP. This term was introduced in the 1974 law of health and safety at work, but was later developed into the notion of ALARP in HSE (1992). Although expressing the same idea, the reader should note that they are not always interchangeable, due to the dissimilar wordings of legal proceedings (HSE, 2001a).


Figure 4.3. The ALARP-principle (adapted from HSE (1992)). Unacceptable region: risk can only be justified under extraordinary circumstances. Tolerable region: risk must be reduced ALARP. Broadly acceptable region: risk is negligible and/or adequately controlled. Below this lies negligible risk.

In HSE (1992), the upper and lower limits of IRPA for workers in the nuclear sector are suggested as 10^-3 and 10^-6 respectively. These numbers are by no means universal, since the factors deciding whether a risk is unacceptable, tolerable or broadly acceptable are dynamic in nature. Melchers (2001) warns that tolerability changes particularly quickly when there is a discontinuity in the normal pattern of events, raising societal and political pressure for redefining the boundaries. Tolerability regions are still spelled out in guidelines and implicitly reflected through industrial practice. It should therefore be emphasized that these criteria are indicators rather than rigid benchmarks, calling for flexible interpretation through deliberation and professional common sense (HSE, 2001b).

Boundary between broadly acceptable and tolerable risk

The boundary between the broadly acceptable and tolerable regions shall according to HSE (1992, p.10) be set 'by the point at which the risk becomes truly negligible in comparison with other risks that the individual or society runs'. HSE (2001b) explains that the IRPA limit of 10^-6 is given by trivial activities that can be effectively controlled or are not inherently hazardous. This is approximately three orders of magnitude lower than the level of background risk a person experiences in his daily environment. With reference to Fischhoff et al. (1981), this is a bootstrapping approach, calling for careful consideration of the moral pitfalls in preserving status quo. But, as the strength of such an approach is its practical feasibility, one can argue that bootstrapping a lower limit improves the manageability of ALARP-analysis. Morally, this may be more easily justified than approaching an upper criterion in the same manner, since the former exists for utilitarian reasons whilst the latter touches equity concerns. With further reference to Fischhoff and his coworkers, it should be stressed that no risk is acceptable unless it provides some benefits. Owing to this, the lower limit is necessarily conditioned on the benefits of a specific situation.

Boundary between unacceptable and tolerable risk

There are no widely applicable criteria defining the boundary between the tolerable and unacceptable region. The argument of HSE (2001b) is that hazards giving rise to considerably high individual risk also invoke social concerns, which often are a far greater determinant of risk acceptability. This is in line with Douglas's (1985) preoccupation with the social dimension of risk perception, understanding risk acceptability as a social construct reaching far beyond an objective claim of physical consequences. Even though HSE (2001b) recommends the use of individual risk criteria in most cases, a tolerability FN-criterion of 50 fatalities per accident is provided for risks giving rise to social concerns. The suggested IRPA limit of 10^-3 should hence be implemented with caution, also considering that it is a very lax limit most industries in the UK and Norway fall well below (Vinnem, 2007). Quite paradoxically, this number is chosen exactly because most hazardous industries pose a substantially lower risk, making an excellent example of how bootstrapping discourages improvement. However, as the upper limits serve only as the starting point of ALARP-improvement (Ale, 2005), this chain of thought serves as a rhetorical argument rather than a conceptual attack on the TOR-framework. A principal flaw is yet demonstrated, in that a lax upper limit may legitimize high risk to a small group of people, since risks falling below this limit are judged tolerable according to utility rather than equity.

Tolerable risk

In the mid region, bounded by the upper and lower acceptability limits, risk must be kept as low as reasonably practicable. A beneficial activity is considered tolerable if, and only if (HSE, 2001b):

- All hazards have been identified
- The nature and level of risk is properly addressed, based on the best available scientific evidence or advice, and the results are used to determine appropriate control measures
- The residual risks are not unduly high and are kept as low as reasonably practicable
- Risks are periodically reviewed to ensure that they still meet the ALARP criteria


These requirements offer a strategy for the choice of risk reduction measures, as well as for the provision of conditional risk acceptance criteria. Seemingly two sides of the same task, a clear distinction is found in the practical use of ALARP in the UK and Norwegian offshore industries. While ALARP is the provider of risk acceptance criteria in the UK, common Norwegian practice is to view ALARP as a risk-reducing process that is conceptually independent of a predefined set of risk acceptance criteria (Aven & Vinnem, 2005). This section continues its focus on the cultivated ALARP-approach of HSE, while the implications of the Norwegian conceptualization are further discussed in chapter 5.2.

Determining that risks have been reduced ALARP involves an assessment of the risk, the sacrifice (in money, time and trouble) of mitigating that risk, and a comparison of these (HSE, 2008). Dependent on the nature of the hazard, the extent of the initial risk and the available control measures, the process demands varying degrees of rigor. As a rule of thumb, the higher up a risk is placed in Figure 4.3, the greater rigor is required. However, in many cases, complying with good practice is sufficient to demonstrate ALARP.

Good practice

According to HSE (2003a, p.1), good practice is:

The generic term for those standards for controlling risk which have been judged and recognized by HSE as satisfying the law when applied to a particular relevant case in an appropriate manner.

Good practice can be enshrined in written standards or come from unwritten sources, given that they are recognized to satisfy the established practice of an industrial sector. The standards are set from HSE's own experience and judgment, international discussions and the best industrial and expert advice of advisory committees (HSE, 1992). Good practice shall not be confused with 'best practice', i.e. a standard considerably above the legal minimum. In the clearing document 'ALARP at a glance' (HSE), it is emphasized that since best practices are not necessarily reasonably practicable, one should not seek their enforcement before they are recognized by HSE as representing good practice. This advice stands in contrast to the GAMAB-approach presented in section 4.4.3, and may intuitively seem to hamper improvement. Both GAMAB and the good-practice approach to ALARP are technology-based. The latter is, however, especially vulnerable to the fallacies of technology-based criteria if HSE fails to keep up with changing technology and organizational practice. Another difficulty is whether one can expect the same application for old and new systems. Nordland (2001) argues that it is neither practicable nor reasonable to retrospectively demand implementation of the latest safety technologies. In fact, continuously modifying an old system may actually be more dangerous than retaining the original technology. Since the issue of practicality is substantial to HSE (2003a), existing installations are required to apply current good practice only to the extent necessary to satisfy the relevant law.

Costs, benefits and disproportionality

Complex decisions should not be taken based on good practice alone. In such cases, good practice shall be followed by a consideration of the reasonable practicability of further risk reduction (HSE, 2001b). If there is an evident disproportion between the costs and effectiveness of a risk reduction measure, this may be done qualitatively through professional judgment. If the situation is less clear-cut, for instance in high hazard industries or when introducing a new technology, a formal analysis is necessary. With reference to Fischhoff et al. (1981), this may take the form of cost-benefit or decision analysis. Within HSE, the prescribed method is CBA (HSE, 2001). For the fundamentals of CBA, the reader is referred to section 4.3.3 or HSE (2008) and annex 3 of NORSOK Z-013N (2001). Focal to this section is the feasibility of CBA in addressing ALARP-concerns. The elements of CBA are in this regard not the overall costs and benefits of a system, but those related to reduction of risk in the particular system. Typical cost elements are those of installation, operation, maintenance and productivity losses following risk reduction measures. Benefits are given as the monetary gains of reduced risk, like the value of preventing fatalities, injuries and environmental damage, or increased productivity. The analysis shall also address whether the introduction of a measure transfers risk to other employees or members of the public (HSE, 2008). Although one is analytically comparing costs and benefits, HSE (2001b) denotes the process as a comparison of risk against costs. Focal in this comparison is the notion of gross disproportionality, requiring measures to be employed unless their costs are in gross disproportion to the expected risk reduction. If several options fulfill this property, the combination of measures providing the lowest residual risk is selected. The ALARP criterion is determined by:

\[
d < \frac{\text{costs of risk reduction}}{\text{benefits of risk reduction}}
\tag{4.1}
\]

A disproportionality factor d of, e.g., 3 means that for a measure to be rejected, the costs must be more than three times larger than the benefits. In many cases, this criterion is given by an evident point of rapidly diminishing marginal returns (HSE, 1992). There are neither authoritative requirements on what ratio to employ, nor a formal algorithm for which factors to take into account (HSE, 2008). An explanation is found in Melchers' (2001) critique of the ALARP principle, contemplating that the critical words 'low', 'reasonably' and 'practicable' are all relative terms of value judgment. Different methods for calculating d have nevertheless been proposed by others, e.g. by Bowles (2003).
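
To make the mechanics of (4.1) concrete, the sketch below applies the gross disproportion test to a single hypothetical measure. The monetary figures, the value of a prevented fatality and the factor d = 3 are illustrative assumptions only; none of them are prescribed by HSE.

# Hedged illustration of the gross disproportion test in (4.1).
# All figures are hypothetical; HSE prescribes neither these monetary
# values nor a universal disproportionality factor d.
def grossly_disproportionate(cost, benefit, d):
    """True if the measure may be rejected, i.e. cost > d * benefit."""
    return cost > d * benefit

value_of_preventing_a_fatality = 2.0e6   # assumed monetary value
expected_fatalities_averted = 0.4        # assumed, over the installation's lifetime
benefit = expected_fatalities_averted * value_of_preventing_a_fatality   # 0.8e6
cost = 3.5e6                             # assumed installation and operating costs
d = 3                                    # illustrative factor; HSE (2008) cites 2-10
print(grossly_disproportionate(cost, benefit, d))   # True: 3.5e6 > 3 * 0.8e6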

Vinnem et al. (2006) report a commonly used ratio of 6, while commenting that an unfortunate misinterpretation amongst Norwegian offshore operators is that a measure must be proven sufficiently beneficial in advance. As a rule of thumb, HSE (2008) suggests a ratio of 2 for low nuclear risk to members of the public, while a factor of 10 is provided for high risk. Figure 4.4 illustrates that the disproportionality factor varies substantially according to where the risk is located in the triangle. Just below the upper limit, considerable effort may be required even for marginal risk reduction, whilst at the lowest border, expenditure may not be justifiable at all (HSE, 1992). Since the rate of increase is unspecified, one can speculate whether a high risk is more easily judged ALARP when split into many small risks. What further complicates the picture is that judgments of 'gross' are also conditioned on the overall benefits of a technology, such as employment, personal convenience or general social infrastructure. The ratio between costs and risk must therefore be evaluated in light of all circumstances relevant to each case, especially when considering high societal risk (HSE, 2008). An important exception is that the size and financial ability of the duty holder is not a legitimate determinant of disproportionality. The rationale is provided in NORSOK Z-013N (2001), arguing that the economic perspective of society must be chosen to enable a comprehensive, global optimization.

Figure 4.4. Disproportionality in ALARP: the factor d increases with the level of risk, from d ≈ 2 near the broadly acceptable region to d ≈ 10 just below the intolerable limit.

Both proponents (HSE, 1992) and opponents of CBA in ALARP (Melchers, 2001; French et al., 2005) agree that the method provides a far from precise calculation, and that it cannot escape the ethical difficulty of valuing and discounting lives as outlined in section 4.3.3. In NORSOK Z-013N (2001), an additional constraint is highlighted: the costs of risk reduction are deterministic, while the benefits are probabilistic and theoretical. The expected benefits are mathematical only, and will never be realized exactly in practice. Depending on the occurrence of accidents, the final balance over a life cycle will thus be either very negative or very positive. Since an installation may not economically survive the worst case scenario, the maximum loss should therefore be considered in addition to the expected value.
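
The asymmetry can be illustrated with a small simulation sketch; the probability, loss and lifetime below are hypothetical and chosen only to show that the realised life-cycle balance rarely equals the expected one.

# Deterministic cost versus probabilistic, 'mathematical only' benefit.
# All numbers are hypothetical.
import random

cost = 5.0e6                 # deterministic cost of the risk reducing measure
p_accident = 1.0e-3          # assumed annual probability of the accident it prevents
loss_if_accident = 2.0e9     # assumed loss given that accident
lifetime_years = 30

expected_benefit = lifetime_years * p_accident * loss_if_accident   # 6.0e7

random.seed(1)
accidents_averted = sum(random.random() < p_accident for _ in range(lifetime_years))
realised_benefit = accidents_averted * loss_if_accident   # usually 0, occasionally huge

print(expected_benefit - cost, realised_benefit - cost)
# The expected balance is positive, while the realised balance is either
# very negative (no accident would have occurred) or very positive, which
# is why the maximum loss should be considered alongside the expectation.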

The fallacies of cost-benefit analysis are thoroughly assessed by French et al. (2005), comparing the suitability of CBA and multi-attribute utility theory in ALARP evaluations. The conclusions are in favor of the latter, due to four problems precluding current CBA applications to ALARP. These concern the non-objective pricing of safety gains, inconsistent valuation of group and individual risk, the immoral discounting of trade-offs through time, and the lacking theoretical justification and ad hoc use of disproportionality factors. Moreover, CBA is accused of being ill-defined and implicitly subjective. Multi-attribute utility theory is claimed to address each and all of these concerns in a more satisfactory manner, in addition to structuring the debate between different stakeholders in productive ways. Within this method, a disproportionality factor is easily modeled by adjusting the relative weights of costs and benefits, and a simple framework is offered for addressing multiple fatality aversion. The approach is not without drawbacks in an ALARP context, notably because there is an explicit requirement in HSE (2001b) of comparing the monetary costs and benefits of a risk reducing measure. Considering additionally the difficulty of trading very different kinds of attributes, a lack of consistency between decisions is likely to result (French et al., 2005). Although multi-attribute utility analysis is suggested as an alternative to CBA in HSE (1992), practical applications to ALARP seem few in number.

ALARP in practice

HSE director Walker (2001) commends the TOR framework for offering comprehensive decision support that has stood the practical test of time in the UK. In the Norwegian offshore industry, however, obstacles are found in attaining a decision making process liberated from using quantitative risk analyses as the sole basis of documentation. According to Vinnem et al. (2006), this seemingly stems from a widespread misinterpretation of ALARP as a general attitude to safety, rather than a systematic, documented process to be followed up by responsible authorities.

From the viewpoint of a plant owner, ALARP requires more effort than adopting a set of predefined criteria, since evaluations are made on a case-by-case basis (Aven, 2007). This is especially true if lack of good practice demands a full cost-benefit analysis, which is extremely cost- and resource-demanding (HSE, 1992). Strong authority involvement is also implied, evaluating whether the search for alternatives has been sufficiently wide and whether the arguments relating to gross disproportion are valid (Ale, 2005). On the positive side, the pragmatic use of broadly accepted risk criteria may reduce conflict costs from political compromises, as proposed by Starr & Whipple (1980). The existence of a cut-off lower acceptability limit also provides time-saving decision support on what is safe enough.

The lack of a moral discipline

In the critical contribution of Melchers (2001), the ALARP approach is accused of having serious moral implications. In addition to the commonly raised objections to assigning monetary values to the benefit of risk reduction, Melchers is concerned with the dichotomy between socio-economic matters and the morality of risk issues. Based on the assertion that the requirements of reasonableness and practicality lack openness, the approach is accused of excluding public participation in tolerability decisions, and of enabling the cover-up of risk information in cases of economic or political importance. This claim can be contested by the promise of HSE (2001b) to involve all relevant stakeholders in a transparent ALARP process. However, HSE provides little guidance on how this is performed in practice. It can be suggested that the strength of the TOR framework lies in its ability to capitalize on the advantages of equity-, utility- and technology-based criteria, while there is a call for procedural inclusion of the alternative principle of discourse.

4.4.2 ALARA

ALARA is the Dutch acceptability framework, calling for risk to be reduced 'as low as reasonably achievable'. The approach is conceptually similar to ALARP, with the distinguishing feature of not considering a region of broad acceptability. Figure 4.5 shows that broadly acceptable and tolerable risks are replaced by a common notion of acceptable risk. Until 1993, the region of negligible risk was part of the Dutch policy. It has since been abandoned on the grounds that all risks shall be reduced as long as this is reasonable (Bottelberghs, 2000). The principle was originally launched by the International Commission on Radiological Protection in the 1970s, for managing risks for which no non-effect threshold could be demonstrated (HSE, 2002).

What is not explicitly allowed

The distinguishing features of ALARA are addressed by Ale (2005) in a comparative study of risk regulation in the UK and the Netherlands. Despite ALARA's striking similarity with ALARP, their practical interpretations differ greatly. According to Ale, this is primarily due to the distinctive legal and historical contexts of the two countries.

Figure 4.5. The ALARA-principle: risk is divided into an unacceptable region and an acceptable region in which ALARA applies, with no separate region of broad acceptability.

In contrast to the common law tradition in the UK, the Netherlands adheres to a legal system of 'Napoleonic law', which principally regards what is not explicitly forbidden as allowed. The criteria were until recently not legally fixed, but have now been enshrined in Dutch law (Hartford, 2009). Consequently, demands for more safety must be announced through stricter criteria in the law, reducing the role of authorities to securing compliance with the minimum requirements. Whereas upper risk criteria are the start of discussion (and not legally binding) in the UK, they are thus the end of discussion in the Netherlands. This basically means that emphasis is on complying with the limit rather than on the reasonable practicability of further action. ALARA criteria are therefore more strictly anchored than the upper limits of ALARP. Applying an aversion factor of 2, the Dutch curve is also steeper than the UK version. But do these dissimilarities yield different levels of actual risk? Ale (2005) concludes that the final results of spatial planning are surprisingly similar, and that both countries are leading nations in the field of risk control. Although the unacceptable limit of ALARP is remarkably laxer than the Dutch version, the conditional criteria usually end up similar.

Calculations and judgments of reasonableness

Following Ale (2005), the distinction between ALARA and ALARP seemingly owes to differing legal and political interpretations. But is there a conceptual difference, as indicated by the last letters of the abbreviations? After all, there is a fundamental distinction between demanding all risks to be reduced if reasonably achievable and setting a lower limit of safe enough. One would intuitively expect that not having a cut-off implies a limitless search for further risk reduction, although the requirement of reasonableness hinders an uncritical pursuit of safety at any cost. The absence of a lower limit makes the comparison of risk and costs a much finer balancing act than in ALARP, since the criterion of gross disproportionality is known to lapse with decreasing levels of risk. Reasonableness in ALARA is instead determined by requiring risk reduction as long as the marginal benefits exceed the marginal costs. Owing to this, a higher level of precision is required from risk assessments, which is also a consequence of the legal necessity of demonstrating adherence to an upper limit (Ale, 2005).
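
A compact way to state this marginal rule (a formalisation suggested here for clarity, not one given explicitly by Ale) is that a further risk reducing measure m remains reasonable as long as

\[
\Delta B(m) \ge \Delta C(m),
\]

where \Delta B(m) and \Delta C(m) denote the marginal (monetised) benefit and cost of that measure; the search stops at the point where the inequality is reversed.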

Ironically, the search for ALARA is seldom considered reasonable in practice. Jongejan (2008) reports reluctance among both plant owners and local governments to reduce risk beyond legal limits, which is partly related to a common misinterpretation of the principle as biasing decisions towards the side of safety. The main explanation is nevertheless found in the legal interpretation of ALARA, considering political judgments of reasonableness as already built into the upper risk acceptability criteria (Hartford, 2009). Owing to this, a distinction between acceptable and tolerable risk becomes meaningless, suppressing utilitarian concerns as ALARA becomes more of a token statement.

4.4.3 GAMAB

GAMAB is the acronym of the French expression 'Globalement au moins aussi bon', meaning globally at least as good. The principle prescribes the level of risk a new transportation system in France has to fall below, requiring new systems to offer a total risk level that is globally as low as that of any existing equivalent system (EN 50126, 1999). A recent variant of GAMAB is GAME, rephrasing the requirement to at least equivalent (Trung, 2000). This criterion applies to modified systems as well as new technologies, requiring the global risk to be at least equivalent to that prior to the change. The conceptual distinction between GAMAB and GAME remains unclear. A possible interpretation is that 'at least as good' in GAMAB offers a wider interpretation of relevant factors than 'equivalent' in GAME. Since the two acronyms rest on the same ruling principle, and because both are almost exclusively used in the French railway industry, their distinctiveness is assumed irrelevant to this study.

Using existing technology as the point of reference, GAMAB is a pure technology-based criterion. Applying this principle, the decision maker is exempted from the task of formulating a risk acceptance criterion, as it is given by the present level of risk. However, to make the criterion operational, what is meant by 'globally at least as good' and 'equivalent system' needs to be addressed.

An ethical dilemma

The term 'global' is central to GAMAB. It means considering the totality of risk, ignoring how risk is distributed between different subsystems (Stuphorn, 2003). As long as the global level of risk is improved, GAMAB does not voice concern if parts of the system risk have increased. For example, a new transport system offering enhanced safety to first-class passengers may be judged acceptable, even if the risk to people in the rear wagon has increased. As such, the notion of 'global' opens up for trade-offs and overcompensation of risk (Nordland, 1999). Although technology- and equity-based criteria do not principally contradict each other, the GAMAB criterion shows that pragmatic interpretations of a pure technology criterion may lead to equity violations.

Learning-oriented bootstrapping

'At least as good' means a risk level that is as low as or lower than the risk of a comparison system. The simple criterion is then:

\[
\text{Risk metric}_{\text{new system}} \leq \text{Risk metric}_{\text{best existing system}}
\tag{4.2}
\]

According to EN 50126 (1999), the GAMAB analyst is free to choose both approach and metrics for comparison, e.g. collision rate or PLL. This demands calculation of the risk posed by both systems, leading to double work if no risk data are available on the reference system (Schäbe, 2004).
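
A minimal sketch of such a comparison, assuming PLL is the chosen metric and using hypothetical values for both systems, could look as follows; in practice each value would come from a full risk analysis.

# Hedged GAMAB check on one freely chosen metric (here PLL);
# both values are hypothetical.
pll_new_system = 0.8e-2       # expected fatalities per year, new system (assumed)
pll_best_existing = 1.0e-2    # expected fatalities per year, reference system (assumed)

gamab_satisfied = pll_new_system <= pll_best_existing   # criterion (4.2)
print(gamab_satisfied)        # True: globally at least as good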

The requirement of equal risk levels across systems is referred to as 'the principle of equivalency' by Skjong et al. (2007) and as 'comparison criteria' in NORSOK Z-013N (2001). In neither of these is GAMAB mentioned, seemingly leaving it up to the practitioner to consider local or global risk and to choose the system for comparison. In NORSOK Z-013N (2001), it is suggested that a new solution shall not represent any increase in risk compared to current practice. This resembles the notion of 'good practice' in simplified ALARP evaluations, trusting that generally recognized codes of practice provide satisfactory risk. The 'at least' requirement of GAMAB goes beyond this, not only ensuring that state-of-the-art knowledge is taken into account, but also encouraging further learning. Since new systems are required to perform at least as well as the best system on the market, GAMAB is a learning-oriented bootstrapping approach. However, it cannot escape the fundamental weakness of bootstrapping, namely the erroneous assumption that the current level of risk is acceptable. This philosophical difficulty is addressed by Nordland (2001), concluding that acceptable risk criteria shall be determined from scratch.

...or an impediment to improvement?

What is meant by an 'equivalent system' is difficult to specify, since there might be large variations between systems providing the same service (Trung, 2000). Both a high-speed train and an aircraft offer transport to commuters from Trondheim to Oslo, but the number of travelers and the speed of traveling differ greatly. One of the transportation modes may also be considerably more expensive. Are the two systems then comparable? Rausand & Utne (2009) draw parallels between GAMAB and the EU machinery directive, questioning whether one can rightfully compare an inexpensive device to a far more expensive variant. Since cost-benefit considerations are not required in GAMAB, Trung (2000) similarly claims that unrealistic safety objectives may be generated. In this regard, one can ask whether GAMAB hinders rather than promotes improvement, rejecting alternatives on the basis of erroneous standards of reference.

4.4.4 MEM

MEM is the acronym for 'minimum endogenous mortality', a German principle requiring new or modified technological systems not to cause a significant increase in IRPA to any person (Schäbe, 2004). The probability of dying of natural causes is used as the reference level for risk acceptability. MEM is based on the fact that death rates vary with age, and the assumption that a portion of each death rate is caused by technological systems (Nordland, 2001). Unlike ALARP and GAMAB, MEM offers a universal quantitative risk acceptance criterion, derived from the minimum endogenous mortality rate.

Endogenous mortality

'Endogenous mortality' means death due to internal causes, like illness or disease (Stuphorn, 2003). In contrast, exogenous mortality is caused by the external influences of accidents. The endogenous mortality rate is the rate of deaths due to internal causes in a given population at a given time. Figure 4.6 displays the endogenous mortality of various age groups in Norway in 2007. Not unexpectedly, the maximum rate is found amongst the oldest population, whereas youngsters have the lowest rate of occurrences. Children between the ages of 5 and 15 have the minimum endogenous mortality rate, which in western countries is known to be 2 × 10^-4 per person per year on average (EN 50126, 1999). The MEM principle requires any technological system not to impose a significant increase in risk compared to this level of reference.

The significance of increase

According to the railway standard EN 50126 (1999), a 'significant increase' is equal to 5% of MEM. This is mathematically deduced from the assumption that there are roughly 20 types of technological systems (Trung, 2000). Amongst these are technologies of transport, energy production, chemical industries and leisure activities. Assuming that a total technological risk equal in size to the minimum endogenous mortality is acceptable, the contribution from each system is confined to:

\[
R = \frac{R_m}{20} = 10^{-5} \ \text{per person per year}
\tag{4.3}
\]
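
Here R_m = 2 × 10^-4 per person per year denotes the minimum endogenous mortality introduced above. Note that the 1/20 apportionment and the 5% 'significant increase' rule coincide numerically:

\[
0.05 \cdot R_m = 0.05 \cdot 2 \times 10^{-4} = 10^{-5} = \frac{R_m}{20}.
\]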

Figure 4.6. Endogenous mortality in Norway, 2007, by age group, plotted on a logarithmic scale from 10^-4 to 1 per year; the minimum level (MEM) is indicated (Source: Statistisk sentralbyrå).

A single technological system thus poses an unacceptable risk if it increases IRPA by more than 5% of MEM, which roughly lies within the limits of natural statistical variation (Nordland, 1999). It should be emphasized that this criterion concerns the risk to any individual, not only the age group providing the reference value. According to Rausand & Utne (2009), the specific limit is ultimately for the decision maker to choose. This is mainly due to the difficulties of determining the significance of a risk increase.

There are strong assumptions underlying the MEM criterion, notably the assumption of exactly 20 technological systems; in practice, people may be exposed to fewer or more. For each system, the acceptable IRPA must then be relaxed or sharpened accordingly, as pointed out by Stuphorn (2003). Since the number of technological systems increases continually, the acceptance criteria must be periodically updated (Trung, 2000).

Implicit in these calculations is the assumption that an accident does not result in more than 100 fatalities. This is a reasonable assumption for transportation systems, but may not hold for larger technological systems. For potential consequences beyond this number, the limit will decrease (EN 50126, 1999). Figure 4.7 visualizes a MEM-based FN-curve of constant acceptability frequency up to 100 fatalities, decreasing with an aversion factor of -1 in the larger end of the scale. This reasoning is difficult to grasp, as it embodies societal risk in what is principally an IRPA criterion. It is considered sufficient that the reader is aware of this presumption. A final assumption is that the common MEM approach only considers fatality risk. For technological systems posing minor or major injury risk, modified MEM criteria of 10^-3 and 10^-4 are proposed.
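
One possible reading of this MEM-based limit, sketched here for illustration (EN 50126 (1999) may state it differently), lets the tolerable frequency of accidents with N or more fatalities stay constant up to N = 100 and fall off with slope -1 beyond:

\[
F(N) \le
\begin{cases}
10^{-5} & \text{for } N \le 100,\\
10^{-3}/N & \text{for } N > 100.
\end{cases}
\]

The two branches meet at N = 100, since 10^-3/100 = 10^-5.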

Figure 4.7. The MEM criterion holds for accidents resulting in a maximum of 100 fatalities: the tolerable IRPA is constant at 10^-5 up to 100 fatalities and decreases thereafter (adapted from EN 50126 (1999)).

Moral justification of MEM

In contrast to GAMAB, MEM can be apportioned to subsystems (Schäbe, 2004). Depending on the pragmatic apportionment of risk, it may thus consider both distributional and global risk issues. Like GAMAB, the explicit notion of MEM is seldom found outside its country of origin, although similar concepts are used by many regulators. Skjong et al. (2007) describe the common approach of 'comparison with known hazards', in which risk criteria are set by comparing technological risk to the risks implicit in human activities. MEM is a subset of this broad approach, with the distinguishing feature of comparing against internal causes only. It can be suggested that MEM is a variant of the natural standards approach to bootstrapping, while the method described by Skjong et al. is more of a risk compendium or revealed preferences approach. Although both methods have the fundamental bias of assuming that the reference risk is acceptable, the rightfulness of MEM may be more easily claimed due to its natural standard of reference. But since a natural fatality rate of 2 × 10^-4 per year equals the death of almost 1600 German children (Nordland, 1999), it is by no means given that the technologically caused death of an equal number of children is acceptable.

4.4.5 The Precautionary principle

The precautionary principle differs from the other approaches of this chapter. Common to these is that they are all risk-based, meaning that risk management relies on the numerical assessment of probabilities and potential damages (Klinke & Renn, 2002). In contrast, the precautionary principle is a precaution-based strategy for handling uncertain or highly vulnerable situations. Klinke and Renn reason that a risk-based approach of judging numerical risks relative to each other becomes meaningless if based on very uncertain parameters. Owing to this, precaution-based approaches do not provide quantitative criteria against which risks can be compared. Risk acceptability is rather a matter of proportionality between the severity of potential consequences and the measures taken in precaution.

Intention and use

The original definition of the precautionary principle is found in principle 15 of the UN declaration from Rio in 1992 (United Nations, 1992):

Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.

Wilson et al. (2006) discuss several definitions of the principle, finding a common interpretation in that complete evidence of harm does not have to exist for preventive actions to be taken. An alternative interpretation is that 'absence of evidence of risk' should not be taken as 'evidence of absence of risk' (HSE, 2001b). The precautionary principle is hence a guiding philosophy when there are reasonable grounds for concern about potentially dangerous effects, but the scientific evidence is insufficient, inconclusive or uncertain (EU, 2000). DeFur & Kaszuba (2002) note two cases in which the principle is most useful, i.e. situations of present uncertainties and situations where new information will radically alter well-known conditions. In the latter case, a valuable counterbalance is offered to bootstrapping methods that encourage preservation of the status quo.

The precautionary principle is an outgrowth of the increased environmentalist awareness since the 1970s, acknowledging that the scale of technological development has by far exceeded our predictive knowledge of environmental consequences (Belt, 2003). In the Rio Declaration, the principle is explicitly prescribed to the environmental field. Consulting the EU's communication on the principle (EU, 2000), its scope is claimed to be far wider, covering environmental, human, animal and plant effects alike. Common to all is the concern for long-term effects, irreversibility and the well-being of future generations. DeFur & Kaszuba (2002) report applications to the areas of food safety, persistent organic pollutants and even the prevention of worldwide computer crashes in the late 1990s. Trouwborst (2007) sees the EU and generalists like DeFur and Kaszuba as fighting a lonely battle, claiming that legal instruments explicitly linking the principle to non-environmental consequences are few in number. A plausible explanation is provided by Trouwborst himself, calling attention to the often ignored distinction between the exercise of precaution as such (erring on the safe side) and the precautionary principle.

Invoking the precautionary principle

EU (2000) describes the application of the precautionary principle as a three-stage process. In the first stage, recourse to the principle is triggered, followed by a decision stage of whether or not to invoke the principle, and a final stage of selecting precautionary measures. The decision to act is of a political character, whereas the other stages are scientific, having to comply with the general principles applicable to all risk management measures. Every application shall be considered within a transparent, structured analysis of potentially negative effects and scientific uncertainty. Owing to this, applying the precautionary principle does not mean that measures are adopted on an arbitrary or non-scientific basis. The claim of scientific rigor is questioned by Carr (2002), reasoning that the principle is ultimately a moral and political idea.

The political decision to invoke the principle arises in situations where there is good reason to believe that serious harm might occur (even if the likelihood is remote), and the current information makes it impossible to move to the next stages of risk assessment with sufficient confidence (HSE, 2001b). According to the EU commission (EU, 2000, p. 16), the appropriate response is:

The result of a political decision, a function of the risk level that is 'acceptable' to the society on which the risk is imposed.

The quote is not chosen for its preciseness. The link between acceptable risk and precautionary measures is unclear, as the commission seemingly assumes that one already knows the acceptable level of risk. This is problematic for many reasons, notably due to the self-contradiction of requiring an unascertainable risk to be below some fixed level. Jongejan (2008) presents another objection in the bias of taking action based on risk characteristics alone, ignoring that risk is only part of the bigger picture. One can alternatively suggest that the precautionary principle yields acceptable risk by its own means, where what is acceptable is a function of the appropriate measures (not the other way around), and the appropriate measures are contingent on the case-specific benefits, consequences and scientific uncertainties.

Precautionary measures

Appropriate measures range from a total ban, through the funding of a research program, to no action at all, and must according to EU (2000) be:

- Proportional to the severity of the threat
- Non-discriminatory in their application
- Consistent with measures already taken
- Based on an examination of the potential benefits and costs of action and non-action
- Subject to review in the light of new information
- Capable of assigning responsibility for producing the scientific evidence.

The last requirement has been subject to numerous discussions of the precautionary principle. A common interpretation is that the burden of proof is reversed, meaning that a new product or technology is deemed dangerous until its developer can prove the opposite. However, the reversed onus of proof is not a standard consequence of the principle. According to Trouwborst (2007), it is a radical instrument imposing great costs on the proponents of a new technology, reserved for potential situations of great irreversible harm. This interpretation presupposes that it is ultimately up to society to ensure that products bring low risk. But do developers not have a genuine interest in their products being safe? The current trends of ethical awareness, warranty claims and reputation cultivation imply that carrying the onus of proof is advantageous to developers. Alternatively, Belt (2003) suggests that the polarized discussion on the burden of proof is just a proxy for the larger debates surrounding the future of, e.g., agriculture.

Debating the precautionary principle

The precautionary principle has fostered remarkable academic debate, with critique arising from two distinct camps. On one side are those concerned with the costs of the principle (such as the debaters over the reversed burden of proof), while the other camp is occupied by critics unfamiliar with its applications and underlying principles (DeFur & Kaszuba, 2002). Wilson et al. (2006) studied the application of the principle among senior policy makers in Canada, concluding that lack of clarity on when to act is limiting its effective use. Similarly, Belt (2003) argues that the current definitions fail to prescribe the precise conditions under which it is invoked and what action to employ. The practical implications are demonstrated in a recent study by Lyster & Coonan (2009), reporting inconsistent application of the principle in Australian courts.

In a critique of EU (2000), Carr (2002) calls for a strengthening of the ethical and value-based aspects of the decision stage, as a means of justifying precautionary recourse to trade partners and the public. Following Carr, such a justification is a necessary counterweight to critics accusing the principle of stifling technological innovation. Amongst these are Balzano & Sheppard (2002), prophesying that the current formulations risk institutionalizing excessive caution, with disastrous effects on society by leaving it susceptible to decay. The opponents judge the Rio definition to be inherently flawed, encouraging ineffective and costly measures in a Utopian pursuit of 'full scientific certainty'. Due to the discrepancy between the promise of scientific knowledge and the practical lack of it, Tannert et al. (2007) see the principle as problematic to regulatory practice. The REACH legislation for chemicals in Europe is provided as an example, pinpointing the missing correspondence between the precautions taken to deal with uncertainties and the constant demand for further analysis. Whether this is a valid accusation can nevertheless be questioned. Renn (2008) claims that REACH does not relate to any definitions of the precautionary principle, except for the much debated issue of the burden of proof. Trouwborst (2007) recognizes that scientific certainty is a confusion-prone issue, stressing that the precautionary principle prescribes action in spite of uncertainty, not because of it.

Balzano & Sheppard (2002) accuse the precautionary principle of being biased towards perceptions of fear and of lacking the operational qualities needed in regulatory decision making. Not only may the principle be invoked by fear, it can also invoke fear by amplifying unrealistic risk perceptions. The application of precautionary measures must therefore be weighed against the outcomes, whether these are anxieties or unforeseen consequences of poor action. The public skepticism towards nanotechnology serves as an example. According to Phenix & Treder (2004) at the Center for Responsible Nanotechnology, a strict implementation of the principle will give rise to the severest of risks. Not only may no alternative solutions be found for pressing global problems, but the world will also be unequipped to deal with the responsible use of nanotechnology in the future.

The precautionary principle holds both practical and conceptual strengths and shortcomings. It neither provides quantitative risk acceptance criteria, nor can it lessen the controversy of difficult decisions. Nevertheless, it does offer a valuable perspective in a discussion of what is safe enough, as many of the greatest decisions on risk are taken under considerable uncertainty.

5 Concluding discussions

The overall objective of this study is to discuss and create a sound basis for formulating risk acceptance criteria. Fundamental to this aim is a basic understanding of the concepts of risk and risk acceptance, which are clarified and problematized in chapter 2. In chapters 3 and 4 respectively, the problem is explicitly addressed through the sub-objectives of discussing the main concepts and quantities used to formulate risk acceptance criteria, and questioning the basis and applicability of the various approaches to setting risk acceptance criteria. An integral part of these discussions is the set of conceptual problems related to risk acceptance criteria, as prescribed in the fourth objective of the study. For this reason, the most valuable findings are the nuances and contrasts provided in these discussions, pinpointing fallacies and strengths of the various metrics and approaches.

Readers familiar with the subject may notice that an ongoing debate of recent years is omitted. In the academic crusades led by Aven and his coworkers at the University of Stavanger, the value of risk acceptance criteria per se is questioned.[1] The final chapter follows this thread, evaluating the meta-soundness of seeking a sound formulation of risk acceptance criteria. As the ultimate purpose of risk acceptance criteria is to aid decision making on risk, two interrelated questions are raised in these concluding discussions:

- Are risk acceptance criteria feasible to the decision maker?
- Do risk acceptance criteria promote good decisions?

[1] In their critique of risk acceptance criteria, Aven and his coworkers refer to the fixation of an upper limit of acceptable risk. Such a limit is denoted 'absolute probabilistic criteria' by Skjong et al. (2007), and can be seen in contrast to trade-off based criteria.

5.1 Are risk acceptance criteria feasible to the decision maker?

Risk acceptance criteria are claimed to provide the rationale for evaluating calculated risk. Such evaluations take place over a variety of problems and contexts, ranging from settlements on introducing a new technology to local optimization of technical solutions. Common to all is that a decision must be taken, necessitating some kind of decision criterion to arrive at a conclusion. If not, the fate of hazardous technologies could be as accidental as that of the main character in the novel 'The Stranger' by Camus (1942), indifferently fighting a lost battle against the games of coincidence. But are risk acceptance criteria practical seen through the eyes of the decision maker? Or does he, like the character Meursault, feel weighed down by the evaluation criteria?

5.1.1 Non-contradictory ordering of alternatives

According to Douglas (1985), a rational choice presupposes a non-contradictory ordering of the relative desirability of alternatives. This coincides with the intention of risk acceptance criteria, i.e. to provide an objective means for ordering issues of risk. However, there are reasons to claim that risk acceptance criteria provide inconsistent decision support. This can be suggested to work on two levels: one that touches the fundamental deficiency of single-valued risk acceptance criteria, the other being pragmatically conditioned. Abrahamsen & Aven (2008) are concerned with the former, rejecting the use of risk acceptance criteria in isolation from other concerns. Evaluating FAR within two axiomatic theories of decision making, Abrahamsen and Aven conclude that absolute FAR-based risk acceptance criteria provide inconsistent recommendations. In fact, this holds for all metrics when used in isolation, as the relative desirability of a set of options may change if two decision problems are differently framed. The swine flu serves as a banal example, in which the desirability of getting inoculated is determined not only by the swine flu fatality risk, but also by the known side effects of the vaccine and the queue at the public health service. For this reason, Aven and Abrahamsen argue in favor of the trade-off based approach of ALARP. The reader should note that ALARP, too, may provide inconsistent recommendations, due to the lacking operationality of gross disproportionality.

The conclusion of Aven and Abrahamsen largely coincides with Evans & Verlander's (1997) critique of FN-criterion lines. A difference between FN-criterion lines and criteria expressed by FAR, PLL, IRPA or LIRA is that the latter provide unambiguous advice if risk is agreed to be the sole attribute of importance. FN-criterion lines may provide unclear recommendations even under this simplified assumption. Figure 5.1 illustrates two options whose calculated risk lies partly above the criterion line, representing a decision problem for which the decision maker is provided with little but confusing aid. The reader should note that the criterion line is drawn with an aversion factor of -1. Adopting the Dutch practice of setting α to -2, option A stands out as the preferable choice.

Figure 5.1. Contradictory ordering of alternatives with respect to FN-criterion lines: the calculated risk curves of both option A and option B cross the risk acceptance criterion line in the F-N diagram.

The feasibility of FN-criterion lines is thus pragmatically conditioned not only on the relative set of options, but also on assumptions of risk aversion and initial anchoring. Rhetorically, one can ask whether this decision problem is any less ambiguous without the presence of an acceptability criterion line. The relative desirability of the two options is equally unclear with respect to risk. However, as the decision maker is free to weigh other concerns, the alternatives may possibly be ranked in a clear order of preference, reinforcing the arguments of Abrahamsen & Aven (2008).
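
The effect of the aversion factor can be mimicked with a small sketch; the FN-data for the two options and the anchoring constant are invented for illustration and are not taken from Figure 5.1.

# Hypothetical FN-data checked against a criterion line F(N) <= C / N**alpha.
C = 1e-3                                    # assumed anchor at N = 1 (per year)
option_a = {1: 8e-4, 10: 2e-4, 100: 1e-6}   # frequency of N or more fatalities
option_b = {1: 2e-4, 10: 5e-5, 100: 2e-5}

def max_exceedance(fn_curve, alpha):
    """Largest ratio of calculated frequency to the criterion line."""
    return max(f / (C / n**alpha) for n, f in fn_curve.items())

for alpha in (1, 2):
    print(alpha, max_exceedance(option_a, alpha), max_exceedance(option_b, alpha))
# alpha = 1: both options exceed the line by the same factor (2.0) -> little guidance
# alpha = 2: option A exceeds by 20, option B by 200 -> option A stands out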

5.1.2 Preciseness of recommendations

The most important quality of risk acceptance criteria is, according to NORSOK Z-013N (2001), that local conditions and the effect of risk reducing measures are reflected. The importance of considering realistic exposure should be reemphasized. If the decision maker is to compare an overall acceptance criterion with a theoretical risk aggregated over an inhomogeneous group of people, imprecise recommendations are likely to follow. This is a pragmatic restriction that holds for all risk metrics, but is hardly valid as a generic argument against the use of risk acceptance criteria. If properly accounted for, this is actually a benefit of using risk acceptance criteria. Particularly suited are IRPA and FAR, which advantageously allow for precise accounting of exposure. Whether following the ethics of utility or justice in allocating risk reducing measures, IRPA and FAR may thus provide the decision maker with a precise term of reference. PLL, on the other hand, is ill-suited for this purpose, as neither exposure nor local variations are reflected. This is why PLL is seldom used as an absolute probabilistic criterion, but rather as input to overall ALARP analyses. Although preciseness is by and large given by the choice of risk metric, it is also conditioned on how criteria are derived. GAMAB stands out in this manner, as it by definition is concerned with overall aspects exclusively.

5.1.3 A binary decision process

Regardless of the consistency or preciseness of recommendations, risk acceptance criteria are of limited value if they are impractical to real-life decision making. A polarization is seen between the absolute criteria provided by GAMAB and MEM (and the common interpretation of ALARA), and the conditional criteria of ALARP and the precautionary principle. The trade-off analysis of ALARP is recognized to be a resource-intensive task, posing extensive requirements to both regulatory and company involvement. In comparison with absolute risk acceptance criteria, ALARP is a cumbersome process that is unlikely to succeed if regulatory supervision and incentives are not in place. This partially explains the moderate success of ALARP processes in the Norwegian offshore sector, as reported by Vinnem et al. (2006). The issue is partly resolved by HSE through standards of good practice, easing the ALARP process for operators of well-known technologies. ALARP may also be qualitatively formulated (e.g. by risk matrices), offering a practical advantage in cases where quantitative data are lacking. This is particularly relevant in comparison with GAMAB, as data on the reference system may not exist.

Aven et al. (2006) admit that absolute criteria provide a binary decision making process that is utterly practical. As conceptualized in Figure 2.5, the decision maker simply has to conclude whether the described risk is above or below a cut-off limit. In an ALARP evaluation, formula (4.1) for gross disproportionality serves as a similar criterion. But as the disproportionality factor is by no means absolute, ALARP cannot claim the same user-friendliness as MEM or GAMAB. Even more complicated is the precautionary principle, which is rightly accused of lacking clarity concerning when and how the principle is to be invoked. Comparing the practicality of the precautionary principle against the other approaches is indeed an unfair match, as it is reserved for situations of great uncertainty and is thus fundamentally distinct.

5.1.4 Risk acceptance criteria simplify the decision process

Although conceptual problems are identified, it is the opinion of this author that risk acceptance criteria provide considerable aid for reaching decisions on risk. Seen through the eyes of the decision maker, absolute criteria expressed by single-value metrics are likely to simplify the decision making process. ALARP and ALARA evaluations also provide efficient aids, although their recommendations appear less clear-cut and are relatively time-consuming. However, considering the conceptual complexity of risk acceptance, it is evident that simplification comes at a price. Having questioned whether risk acceptance criteria offer practical decision support, an equally important question remains to be asked: do risk acceptance criteria promote good decisions?

5.2 Do risk acceptance criteria promote good decisions?

A good decision is, according to Fischhoff et al. (1981), one that addresses all five complexities presented on p. 22. As these are mostly pragmatic, i.e. dependent on the specific application in a certain situation or company, they are not the focus of this final discussion. Rather, generic problems related to risk acceptability are addressed.

To judge what constitutes a sound decision, the perspective must necessarily be widened from the concern of the decision maker to include all actors affected by the risk. In the eyes of a company, a good decision is one that enables a proper balance between production and protection, as conceptualized by Reason (1997) in Figure 5.2. The figure works at a societal level as well, but from a governmental point of view the balance is intensely intertwined with political and ethical considerations. For the public, a good decision is one that is in line with personal levels of acceptable risk, resulting from the trade-off of factors described in section 2.9.

Figure 5.2. The relationship between production and protection (adapted from Reason, 1997): low and high hazard ventures must keep to a parity zone bounded by bankruptcy on the protection side and catastrophe on the production side.

Ethical, trade-off and strategic aspects of the goodness of risk acceptance criteria are discussed in the following. Since all issues hinge on the assumption that an objective criterion for acceptable risk may be set, the thread from section 2.4 must first be picked up.

5.2.1 The interpretation of probability to risk acceptance criteria

According to Nordland (2001), the risk-based approaches of ALARP, ALARA, GAMAB and MEM are all based on the assumption that an objective level of acceptable risk exists. With reference to chapter 2, this assumption holds not only one but two highly speculative beliefs: that both risk and risk acceptance can be objectively expressed. As the latter presupposes the former, it is fundamental to address the implications of subjective probabilities for the use of risk acceptance criteria.

Intuitively, risk acceptance criteria seem meaningless if probability, and hence risk, cannot be claimed an objective existence. If two people can assign two distinct probabilities to the same event, how is the risk to be evaluated if one falls below the criterion line and the other above it? And on what term of reference may a criterion be set to claim sovereignty? Even simple methods of bootstrapping will fail, due to the apparent impossibility of demonstrating an objective level for comparison. These are extensions of the objections of Watson (1994), concluding that a subjective interpretation of probability necessarily reduces the role of probabilistic safety analysis to an advisory one. According to Watson, this recognition stands in alarming contrast to the wordings of American regulations, which regard risk analyses as the legitimate provider of 'truths'. In today's Norway as well, this seems to be an implicit assumption in most regulations (Aven, 2007). Yellmann & Murray (1995) mock Watson for having an extreme reaction to the unpleasant recognition that no risk analysis can ever be perfectly objective, rhetorically asking whom you can trust if you cannot trust your probabilistic safety analyst.

Analytical consensus and practitioner judgment

A substantial tenet of De Finetti (1974) is that probabilities are conditioned on one's current state of knowledge. Since the overall task of risk analysts is to seek knowledge about risk, arbitrary values are by no means assigned to either input or output probabilities. While probabilities for well-known technologies may be assessed through experience databases or physical experiments, probabilities of future systems are inferred through advanced models and expert judgment. Although the latter can be claimed to carry a larger element of subjectivity, analytical consensus may be sought in both cases. Owing to this, Vatn (1998) prefers the notion of 'für uns' (for us) probability, characterizing the agreed probabilities amongst a knowledgeable group of risk analysts. Following this interpretation, it is perfectly meaningful to draw reference lines for comparison of risk, as long as one accepts the assumptions made explicit in the analysis and the formulation of risk acceptance criteria. Aven (2007) agrees with this position, while stressing that acceptance criteria must not be mistaken to represent benchmarks of objective truths.

Subjective probabilities cannot be seen to weaken the value of risk acceptance criteria per se. However, what may pose a problem is regulators and practitioners interpreting criteria and risk assessments as objective references providing rigid cut-off limits. Acknowledging that additional information may alter the risk assignments as well as the criterion lines, the decision maker should exercise judgment if the calculated risk falls close to the limits. This holds for all probability-generated risk metrics, deduced from all risk-based approaches. A special concern is voiced for MEM, as it is the principle most explicitly announcing an objective level of reference.

Erring on the side of safety

The importance of avoiding a strict interpretation of risk acceptance criteria is perhaps greater following a frequentist interpretation. Evaluating ALARP in light of the two schools of probability, Schofield (1998) concludes that the relative frequency approach presents significant problems of model validation. The subjective interpretation, on the other hand, is found to offer a powerful perspective for trade-off analyses in the ALARP region. Uncertainties in estimation are particularly large for low-frequency/high-consequence events, in a frequentist's search for an objective quantity. A risk located in the upper tolerability region in Figure 4.3 may in such a case 'actually' lie above the unacceptable limit. This also holds for Bayesian calculations, with the important distinction that uncertainties are assigned to the actual value. This is why conservative judgments are often preferred over best estimates, erring on the safe side in the face of uncertainty. While NORSOK Z-013N (2001) recommends the use of best estimates, a conservative approach is defended in e.g. NSW (2008). In some cases, the epistemic uncertainty may be so large that one cannot even know whether a prediction lies in the conservative ballpark. The introduction of nanotechnology serves as a timely example, whose associated risks are so uncertain that comparison with predefined criteria cannot be justified, regardless of whether one is of Bayesian or frequentist conviction. In such cases, the decision maker must rather turn to the precautionary principle.

5.2.2 Ethical implications of risk acceptance criteria

Examining the ethical justification of risk acceptance criteria, Aven (2007) concludes that there are no stronger ethical arguments for using absolute risk acceptance criteria compared to trade-off based regimes. While there are arguments both for and against the use of risk acceptance criteria, these are not primarily of an ethical character. According to Aven, there should be no discussion on the need for considering all the ethical stances of utility, justice, discourse and ethics of the mind.[2] What should be debated is rather the balance of the various principles and concerns. This balancing act can be suggested to work at two levels: explicitly in the choice of approach, and implicitly in the selection of risk metrics.

[2] Ethics of the mind are rooted in the philosophy of Immanuel Kant. This line of reasoning states that the rightness of an act is not determined by its consequences. Rather, actions are correct in and of themselves without reference to other attributes, because they stem from fundamental obligations (Hovden, 1998).

The ethical act of balancing

ALARP serves as a textbook example of balancing principal lines of reasoning, although the practical exclusion of the discourse stance was questioned in section 4.4. Since equity and cost-benefit trade-offs are, according to Douglas (1985) and Fischhoff et al. (1981), central determinants of risk acceptability, ALARP possesses a unique advantage in capturing both utility and justice.

MEM and GAMAB are based on the single principles of equity and technology, respectively. Although both provide an absolute risk limit, they must not be mistaken as equal from an ethical point of view. As MEM requires an upper restriction of IRPA, it is rooted in the ethics of justice. GAMAB is a technology-based criterion, not explicitly relating to any ethical stance. Rather, the ethics of GAMAB lie implicit in how previous standards were set, which section 4.3 demonstrated to be a fundamental deficiency of all bootstrapping approaches. GAMAB is furthermore indifferent to equity considerations, as it by definition concerns global risk solely. Although the decision maker is free to express GAMAB by IRPA, this must necessarily be based on extensive averaging if the aggregated risk is to be reflected. Recalling the advice of Holden (1984), GAMAB is thus inadequate from the view that risk acceptance criteria should reflect both the totality and the distribution of risk. As Ball & Floyd (1998) recommend the use of both individual and societal risk metrics, a plausible solution is to combine the IRPA-based MEM criterion with FAR or FN-criterion lines deduced from GAMAB. The reader should be aware that such a symbiosis completely disregards the ethics of utility, which is properly accounted for only in the framework of ALARP. And regardless of approach, the dual requirement of considering both societal and individual risk is likely to give rise to ethical dilemmas. For example, Vatn (2009) leaves open whether it is ethically sound to reduce societal risk at the expense of increased IRPA to those fighting a hazardous event.

Who should set the criteria?

Acknowledging the ethical dilemmas of risk acceptance criteria, the moral and political question of who shall set risk acceptance criteria must necessarily follow. In Norway, it is up to the operators to define the criteria. According to Aven (2007), this creates an ethical problem, as the regulators necessarily have a broader societal perspective than the industry, whose primary goal is profit. Ball & Floyd (1998) similarly note that few duty holders are able to deal with complex policy issues of acceptable risk, while stressing that the enterprises play an important role in accounting for non-public concerns. Owing to this, regulatory authorities are claimed to have an important role, at least in providing guidance. While the UK's HSE offers extensive advice on the accomplishment of ALARP, Norwegian authorities provide few guidelines on the formulation of risk acceptance criteria. An unfortunate effect is that it is almost impossible for politicians to reject a company's risk acceptance criteria on grounds of principle, as discussed by Vatn (2009) in light of the new LNG facility at Risavika, Norway. What is more serious is that the criteria are developed for single plants only, possibly without consideration of the aggregated risk from the totality of enterprises. If the criteria are not in accord with the overall safety target of society, failure to take a holistic approach may, according to Skjong et al. (2007), yield disproportionate expenditures and excessive levels of global risk.

Transparency and stakeholder involvement

Paramount to the question of who shall set the criteria is how to account for the opinions of relevant stakeholders. The ongoing debate concerning potential oil and gas production in Lofoten in Norway illustrates how a variety of actors have interests in large societal decisions. This is eminently an issue of risk communication, which lies outside the scope of this report; the interested reader may consult for instance Sjöberg (2003). However, a transparent formulation of risk acceptance criteria is a prerequisite for successful risk communication. Skjong et al. (2007) denote this 'the accountability principle', demanding a single, open and clear process for managing risks affecting the public. According to Skjong et al., the principle favors quantitative risk acceptance criteria and objective assessments. With reference to the previous discussion, the latter requirement is unfortunate. What is more, claims of objective assessments presuppose a strict separation of facts and value judgments, which according to Fischhoff et al. (1981) is utopian. Since the decision rules are explicitly stated, risk acceptance criteria will, on the positive side, render a transparent decision process. Whether these rules are known to the general public is yet another problem. MEM stands out as the most transparent, being the only principle prescribing what value of IRPA to employ. This is also true for the practical interpretation of ALARA, as the rigid upper limits of the Dutch government are known as the prevailing criteria. Even if trade-off analyses are performed, the ALARA requirement of marginal benefits is relatively easy to track. GAMAB and ALARP, on the other hand, are clouded by the unclear definitions of comparable systems and gross disproportionality.

5.2.3 Compliance or continuous striving for risk reduction?

Although section 2.9 made it clear that risk acceptance is not determined by risk alone, this section puts all considerations but risk aside. While it is sensible to claim that every company or society seeks the balance between safety and productivity described by Reason (1997), this presupposes that the desirable risk level can actually be sought. A basic question therefore needs to be asked: does the use of risk acceptance criteria promote risk reduction?


Given that a company will strive to satisfy its criteria, the resulting risk obviously depends on their stringency. Owing to this, the requirement of GAMAB of being at least as good as the best comparable system seems to promote unprecedentedly low levels of risk. This stands in contrast to traditional bootstrapping approaches, where risk reduction is encouraged only by means of preserving the status quo. As an atypical example, the MEM criterion is relatively strict, but no effort is required to reduce the risk below an IRPA of 10⁻⁵. Since the criterion has remained constant through a variety of innovations, it is likely that the transient assumption of twenty technological systems has weakened its stringency.
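For completeness, the arithmetic behind the 10⁻⁵ limit can be reconstructed as follows; the endogenous mortality figure is the one commonly cited in the MEM literature and is stated here as an assumption rather than a result of this report.

% Reconstruction of the usual MEM argument; R_m is assumed, not derived in this report.
\begin{align*}
  R_m &\approx 2 \times 10^{-4}\ \text{fatalities per person-year}
        && \text{(minimum endogenous mortality, ages 5--15)}\\
  n   &= 20
        && \text{(assumed number of technological systems a person is exposed to)}\\
  \text{IRPA}_{\max} &= \frac{R_m}{n} = \frac{2 \times 10^{-4}}{20} = 10^{-5}
        && \text{per year and system.}
\end{align*}

If the number of systems an individual is actually exposed to grows beyond twenty, the implicit total allowance R_m is exceeded even though each system satisfies its own limit, which is the weakening referred to above.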

In contrast to the standstill criterion of MEM stands ALARP, whose disproportionality criterion holds a promise of risk that is as low as practicality allows. Aven & Vinnem (2005) clearly favor the ALARP approach over absolute criteria, on the grounds that it encourages a continuous striving for risk reduction. Crucial to this argument is the distinction between HSE's and the Norwegian interpretation of ALARP, as reported in Vinnem et al.'s (2006) study of ALARP processes in the Norwegian offshore industry. While the focus of HSE is on reaching good solutions in the ALARP area, the involvement of Norwegian authorities is restricted to verifying compliance with upper limits. According to Aven & Vinnem (2005), minimal impetus is given to operating companies to consider whether further risk reduction is achievable. The main emphasis amongst Norwegian operators has thus been on satisfying the upper criteria, usually with no or small margins. If ALARP evaluations are performed, they very often result in dismissal of possible improvements. This yields an important conclusion: how the criteria are applied in the industry and followed up by the authorities is just as important as their theoretical formulation.
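To illustrate what an ALARP evaluation of a single risk-reducing measure may reduce to in practice, the following Python sketch applies a gross-disproportion test in which the cost of the measure is compared with the monetized risk reduction multiplied by a disproportion factor. The value of preventing a fatality, the factor of three, and the example numbers are assumptions for illustration only, not figures endorsed by HSE or the studies cited above.

# Hedged sketch of a gross-disproportion test; every number below is an assumption.

def grossly_disproportionate(cost, delta_pll, vpf=2.0e6, disproportion_factor=3.0):
    """Return True if the measure's cost is grossly disproportionate to its benefit.

    cost:      implementation cost of the risk-reducing measure (same currency as vpf)
    delta_pll: reduction in expected fatalities over the period considered (PLL)
    vpf:       assumed value of preventing a fatality
    disproportion_factor: assumed factor expressing how much more than the benefit
                          one is willing to spend before rejecting the measure
    """
    benefit = delta_pll * vpf
    return cost > disproportion_factor * benefit

if __name__ == "__main__":
    # A hypothetical measure costing 4 million that removes 0.8 expected fatalities
    # is not grossly disproportionate under these assumptions (4.0e6 <= 3 * 1.6e6),
    # so under this test it should be implemented.
    print(grossly_disproportionate(cost=4.0e6, delta_pll=0.8))

The point of the sketch is only that the disproportionality judgment is an explicit trade-off, which is exactly what the compliance-oriented practice described above tends to bypass.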

5.2.4 One accepts options, not risks

A subjective interpretation of probability does not diminish the credibility of risk acceptance criteria per se. Unfortunately, this recognition is secondary to the fundamental problem of whether acceptable risk is expressible in the form of an objective criterion. The words of Fischhoff et al. (1981) are inescapable: risk is never acceptable in an absolute sense. Rather, risk acceptance is a matter of trade-offs, unique to a particular set of options in a given context. Since the inextricable question of whether acceptable levels of risk exist resides in the realm of philosophy, the obstinacy of Fischhoff and his coworkers will not be challenged in this thesis. What can be questioned is to what extent risk acceptance criteria reflect that risk acceptance is a trade-off problem.

Comparing the different approaches is in this respect a rewarding task, as ALARP is the only approach not only allowing for, but also demanding, trade-off analyses. Neither GAMAB nor MEM possesses this quality, as both prescribe risk as the single attribute of importance. Due to the stringency of criteria following comparison with best practice, this is particularly problematic for GAMAB. One of its main criticisms follows from this deficiency, namely that unrealistic safety objectives may hinder the introduction of a cost-efficient technology or product. The precautionary principle also suffers under this claim. But, as EU (2000) requires an examination of the potential benefits and costs of action and non-action, the criticism of Balzano & Sheppard (2002) cannot rightly be attributed to a principal lack of trade-off considerations.

Fischhoff et al. (1981) conclude that formal analysis is superior to bootstrapping and expert judgment for situations of complex, technological risk. If their groundbreaking contribution had been written after the original TOR report (HSE, 1992), it would probably have included a commendatory chapter on the ALARP principle. According to Aven et al. (2006), HSE's ALARP approach is unique in that it offers conditional rather than absolute acceptance criteria, tailored to the risk, costs and benefits of a specific situation. As such, the principle captures risk acceptability in a balanced consideration of the various benefits and burdens of an activity. Although bootstrapping approaches implicitly reflect this balance, these are trade-offs of past acceptability. The criteria are thus conditional only on the past, unable to reflect more than one aspect of risk acceptance, namely the severity of previously accepted consequences. This uniform focus necessarily restricts future political and managerial flexibility.

Aven and his coworkers advise restricting absolute criteria to lower level functional requirements, e.g. SIL. However, in the opinion of this author, lower level criteria presuppose compliance with some overall acceptance criterion. Vatn (1998) offers a sensible compromise, suggesting that in order to avoid sub-optimization, one should be restrictive with the number of risk acceptance criteria. The normative issues should instead be expressed in terms of value trade-offs for overall optimization.
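As a concrete example of such a lower level functional requirement, the sketch below maps an average probability of failure on demand to a safety integrity level using the bands given for low demand mode of operation; it is a minimal sketch assuming the IEC 61508 banding, and should be verified against IEC 61508 (1998) before any use.

# Hedged sketch: bands assumed to follow IEC 61508, low demand mode of operation
# (average probability of failure on demand); verify against the standard before use.

def sil_from_pfd(pfd_avg):
    """Map an average probability of failure on demand to a safety integrity level."""
    if 1e-5 <= pfd_avg < 1e-4:
        return 4
    if 1e-4 <= pfd_avg < 1e-3:
        return 3
    if 1e-3 <= pfd_avg < 1e-2:
        return 2
    if 1e-2 <= pfd_avg < 1e-1:
        return 1
    return None  # outside the tabulated SIL bands

if __name__ == "__main__":
    print(sil_from_pfd(5e-3))  # -> 2, i.e. a SIL 2 requirement

Such a band is an absolute, lower level criterion in Aven's sense: it says nothing about whether the safety function is worth its cost, which is why it presupposes an overall criterion or trade-off framework above it.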

5.3 What we really are looking for

According to Breugel (1998), the first thing to ask in discussions about acceptable risk is what we are really looking for. Paradoxically, a simple answer to this question is impossible, as it depends on the particular aspects we are interested in. Intertwined in risk acceptability are multidisciplinary problems of how safe is safe enough, how stable is stable enough, what level of economic growth to seek, and the distribution of prosperity and global imbalance. On top of this comes the question of which authority, if any, is qualified to define what represents 'enough'.


5.3.1 Overall conclusions and recommendations for further work

What we are really looking for in this study is to create a sound basis for the formulation of risk acceptance criteria. This final chapter has raised a set of conceptual questions regarding the goodness of risk acceptance criteria. The various approaches to setting risk acceptance criteria differ with respect to consistency, practicality and ethics, and in their ability to encourage risk reduction and to reflect risk acceptance. Furthermore, the different metrics by which risk acceptance criteria are expressed implicitly or explicitly affect how these issues are resolved. Notwithstanding that conceptual problems jeopardize the sound decision support of risk acceptance criteria, valuable insights are offered into their formulation. It is equally important that these insights are known to the decision maker. If users of risk acceptance criteria are unaware of their limitations and underlying assumptions, there is little point in perfecting the procedure of formulation. Striving for a sound formulation may even yield negative effects, if the decision maker is convinced that the criteria provide a perfect term of reference. From this it follows that risk acceptance criteria offer sound decision support, but only if their authors and users have a comprehensive understanding of the applied metrics and approaches. Owing to this, practitioners are advised to interpret risk acceptance criteria as guiding benchmarks, rather than as rigid representations of an ideal truth.

The discussions demonstrate that risk acceptance criteria provide no perfect term of reference. As such, the thesis offers a valuable point of departure for the practitioner, if only by challenging his own discretion. Moreover, the examination shows that risk acceptance criteria are by no means a settled issue. As the numerous studies of Aven and his coworkers in Stavanger show, issues remain unresolved regarding the practical implementation of risk acceptance criteria on the continental shelf. This calls for further research on the dialectic roles of industry and government in formulating and complying with risk acceptance criteria. Finally, the complex issue of environmental damage is omitted from the study. Although environmental acceptance criteria are required in PSA (2001), current approaches are theoretically and practically underdeveloped. This calls for theoretical maturing and academic debate on how environmental consequences may be adequately included in the acceptable triplet of risk.


References

Abrahamsen, E. & Aven, T. (2008). On the consistency of risk acceptance criteria with normative theories for decision making. Reliability Engineering and System Safety, 93, 1906–1910.

Adams, J. (2003). Risk and Morality. University of Toronto Press Incorporated, Toronto Buffalo London.

Ale, B. (2005). Tolerable or acceptable: A comparison of risk regulation in the United Kingdom and the Netherlands. Risk Analysis, 25, 231–241.

Ale, B., Aven, T., & Jongejan, R. (2009). Review and discussion of basic concepts and principles in integrated risk management. In Reliability, Risk and Safety: Theory and Applications. Proceedings from ESREL 2009.

Arbeidstilsynet (2009). Døde etter næring. Technical report, Arbeidstilsynet. http://www.arbeidstilsynet.no.

Australian Safety and Compensation Council (2008). The health of nations: The value of a statistical life. Technical report, Australian Government, Australian Safety and Compensation Council.

Aven, T. (2003). Foundations of risk analysis. Chichester: Wiley.

Aven, T. (2007). On the ethical justification for the use of risk acceptance criteria. Risk Analysis, 27, 303–312.

Aven, T. (2009). Safety is the antonym of risk for some perspectives of risks. Safety Science, 47, 925–930.

Aven, T. & Vinnem, J. (2005). On the use of risk acceptance criteria in the offshore oil and gas industry. Reliability Engineering and System Safety, 90, 15–24.

Aven, T., Vinnem, J., & Vollen, F. (2006). Perspectives on risk acceptance criteria and management for offshore applications – application to a development project. International Journal of Materials and Structural Reliability, 4, 15–25.

Baker, G. E., Priest, S., Tebo, P. V., Baker, J. A. I., Rosenthal, I., Bowman, F. L., Hendershot, D., Leveson, L., Wilson, D., Gorton, S., & Wiegmann, D. (2007). The report of the BP U.S. Refineries Independent Safety Review Panel. Technical report, The BP U.S. Refineries Independent Safety Review Panel.

Ball, D. & Floyd, P. (1998). Societal risks. Final report. Technical report, The Health and Safety Executive.

Balzano, Q. & Sheppard, A. (2002). The influence of the precautionary principle on science-based decision-making: Questionable applications to risks of radiofrequency fields. Journal of Risk Research, 5, 351–369.

Barry, B., de Wilde, J., & Waever, O. (1998). Security: A New Framework for Analysis. Boulder, London.

Belt, H. d. (2003). Debating the precautionary principle: "guilty until proven innocent" or "innocent until proven guilty"? Plant Physiology, 132, 1122–1126.

Bottelberghs, P. (2000). Risk analysis and safety policy developments in the Netherlands. Journal of Hazardous Materials, 71, 59–84.

Bowles, D. (2003). ALARP evaluation: Using cost effectiveness and disproportionality to justify risk reduction. In ANCOLD 2003 Conference on Dams.

BP (2006). Guidance on practices for layer of protection analysis (LOPA). Technical report, British Petroleum procedure: Engineering Technical Practice (ETP) GP 48-03.

Breakwell (2007). The psychology of risk. Cambridge University Press, Cambridge.

Breugel, K. v. (1998). How to deal with and judge the numerical results of risk analysis. Computers and Structures, 67, 159–164.

Campbell, S. (2005). Determining overall risk. Journal of Risk Research, 8, 569–581.

Camus, A. (1942). The Stranger. Random House Inc, New York.

Carr, S. (2002). Ethical and value-based aspects of the European Commission's precautionary principle. Journal of Agricultural and Environmental Ethics, 15, 31–38.

Chevreau, F., Wybo, J., & Cauchois, D. (2006). Organizing learning processes on risks by using the bow-tie representation. Journal of Hazardous Materials, 130, 276–283.

De Finetti, B. (1974). Theory of probability. Volume 1. Wiley and Sons.

DeFur, P. & Kaszuba, M. (2002). Implementing the precautionary principle. The Science of the Total Environment, 288, 155–165.

Ditlevsen, O. (2003). Decision modeling and acceptance criteria. Structural Safety, 25, 165–191.

DNV (2008). Phast. DNV software.

Douglas, M. (1985). Risk acceptability according to the social sciences. Routledge, London.

Elvebakk, B. (2007). Vision zero: Remaking road safety. Mobilities, 2, 425–441.

Elvik, R., Kolbenstvedt, M., Elvebakk, B., Hervik, A., & Brin, K. (2009). Costs and benefits to Sweden of Swedish road safety research. Accident Analysis and Prevention, 41, 387–392.

EMS (2001). LUL QRA – London Underground Limited Quantified Risk Assessment. Update 2001. Technical report, Safety Quality and Environmental Department of London Underground.

EN 50126 (1999). Railway applications – The specification and demonstration of reliability, availability, maintainability and safety (RAMS). European Norm, Brussels.

EU (2000). Communication from the Commission on the precautionary principle (COMM). Technical report, Commission of the European Communities, Brussels.

EU (2006). Council Directive 2006/42/EC of 17 May 2006 on machinery. Official Journal of the European Communities, L 157/24.

Evans, A. & Verlander, N. (1997). What is wrong with criterion FN-lines for judging the tolerability of risk? Risk Analysis, 17, 157–168.

Fischhoff, B. (1994). Acceptable risk: A conceptual proposal. Risk: Health, Safety and Environment, 1, 1–28.

Fischhoff, B., Lichtenstein, S., Slovic, P., Derby, S., & Keeney, R. (1981). Acceptable risk. Cambridge University Press, New York.

French, S., Bedford, T., & Atherton, E. (2005). Supporting ALARP decision making by cost benefit analysis and multiattribute utility theory. Journal of Risk Research, 8, 207–223.

Garland, D. (2003). Risk and morality. University of Toronto Press Incorporated, Toronto Buffalo London.

Hammit, J. (2000). Valuing mortality risk: Theory and practice. Environmental Science and Technology, 34, 1396–1400.

Hartford, D. (2009). Legal framework considerations in the development of risk acceptance criteria. Structural Safety, 31, 118–123.

Holden, P. (1984). Difficulties in formulating risk criteria. Journal of Occupational Accidents, 6, 241–251.

Holton, G. (2004). Defining risk. Financial Analysts Journal, 60, 19–25.

Hovden, J. (1998). Ethics and safety: "mortal" questions for safety management. In Paper for Safety in Action, Melbourne 1998.

Hovden, J. (2003). Theory formations related to the "risk society". In NoFS XV 2003, Karlstad, Sweden.

HSE. ALARP "at a glance". Technical report, The Health and Safety Executive. http://www.hse.gov.uk/risk/theory/alarpglance.htm.

HSE (1992). The tolerability of risk from nuclear power stations. Technical report, HMSO, London.

HSE (2001a). Principles and guidelines to assist HSE in its judgments that duty-holders have reduced risks as low as reasonably practicable. Technical report, The Health and Safety Executive. http://www.hse.gov.uk/risk/theory/alarp1.htm.

HSE (2001b). Reducing risks, protecting people: HSE's decision-making process. Technical report, HMSO, Norwich.

HSE (2002). Toxic substances bulletin, issue 47. Technical report, The Health and Safety Executive. http://www.hse.gov.uk/toxicsubstances/issue47.htm.

HSE (2003a). Assessing compliance with the law in individual cases and the use of good practice. Technical report, The Health and Safety Executive. http://www.hse.gov.uk/risk/theory/alarp2.htm.

HSE (2003b). Good practice and pitfalls in risk assessment. Technical report, Health and Safety Executive. http://www.hse.gov.uk/research/rrhtm/rr151.htm.

HSE (2003c). Policy and guidance on reducing risks as low as reasonably practicable in design. Technical report, The Health and Safety Executive. http://www.hse.gov.uk/risk/theory/alarp3.htm.

HSE (2004). Guidance on 'as low as reasonably practicable' (ALARP) decisions in control of major accident hazards (COMAH). Technical report, The Health and Safety Executive. http://www.hse.gov.uk/comah/circular/perm12.htm.

HSE (2008). HSE principles for cost benefit analysis (CBA) in support of ALARP decisions. Technical report, Health and Safety Executive. http://www.hse.gov.uk/risk/theory/alarpcba.htm.

HSE (2009). Societal risk: Initial briefing to societal risk technical advisory group. Technical report, The Health and Safety Executive. http://www.hse.gov.uk/research/rrpdf/rr703.pdf.

IEC 61508 (1998). Functional safety of electrical/electronic/programmable electronic safety-related systems. Part 4. International Electrotechnical Commission, Geneva.

ISO/IEC Guide 51 (1999). Safety aspects – guidelines for their inclusion in standards. International Organization for Standardization and International Electrotechnical Commission.

Johannesson, M., Jönsson, B., & Karlsson, G. (1996). Outcome measurement in economic evaluation. Health Economics, 5, 279–296.

Jongejan, R. (2008). How safe is safe enough? The government's response to industrial and flood risks. PhD thesis, Technische Universiteit Delft.

Jongejan, R., Jonkman, S., & Maaskant, B. (2009). The potential use of individual and societal risk criteria within the Dutch flood safety policy (part 1): Basic principles. In Reliability, Risk and Safety: Theory and Applications. Proceedings from ESREL 2009.

Kaplan, S. & Garrick, J. (1981). On the quantitative definition of risk. Risk Analysis, 1, 11–27.

Kjellén, U. (2000). Prevention of accidents through experience feedback. Taylor and Francis, London.

Kjellén, U. & Sklet, S. (1995). Integrating analyses of the risk of occupational accidents into the design process. Part 1: A review of types of acceptance criteria and risk analysis methods. Safety Science, 18, 215–227.

Klinke, A. & Renn, O. (2002). A new approach to risk evaluation and management: Risk-based, precaution-based and discourse-based strategies. Risk Analysis, 22, 1071–1094.

Lind, N. (2002a). Social and economic criteria of acceptable risk. Reliability Engineering and System Safety, 78, 21–25.

Lind, N. (2002b). Time effects in criteria for acceptable risk. Reliability Engineering and System Safety, 78, 27–31.

Linnerooth-Bayer, J. (1993). The social mismanagement of risk? Risk aversion and economic rationality. Technical report, International Institute for Applied Systems Analysis (IIASA).

Lyster, R. & Coonan, E. (2009). The precautionary principle: A thrill ride on the roller coaster of energy and climate law. RECIEL, 18, 38–49.

Marszal, E. (2001). Tolerable risk guidelines. ISA Transactions, 40, 391–399.

Martz, H. & Waller, R. (1988). On the meaning of probability. Reliability Engineering and System Safety, 23, 299–304.

Melchers, R. (2001). On the ALARP approach to risk management. Reliability Engineering and System Safety, 71, 201–208.

Möller, N., Hansson, O., & Peterson, M. (2006). Safety is more than the antonym of risk. Journal of Applied Philosophy, 23, 419–432.

NASA (2002). Probabilistic risk assessment procedures guide for NASA managers and practitioners. NASA Office of Safety and Mission Assurance, Washington D.C.

Nordland, O. (1999). A discussion of risk tolerance principles. The Hazards Forum Newsletter, issue 27, 2–6.

Nordland, O. (2001). When is risk acceptable? In Presentations at 19th International System Safety Conference, Huntsville, Alabama, USA, September 2001.

NORSOK Z-013N (2001). Risiko- og beredskapsanalyse. Standard Norge, Oslo.

NS 5814 (2008). Krav til risikovurderinger. Standard Norge, Oslo.

Næss, A. (1985). Filosofiske betraktninger om lykke og ulykke. In NOFS-85, SINTEF.

NSW (2008). Hazardous industry planning advisory paper (HIPAP) No 4 – Risk criteria for land use safety planning (draft). Technical report, NSW Government, Department of Planning, Sydney, Australia.

OLF 070 (2004). Application of IEC 61508 and IEC 61511 in the Norwegian petroleum industry. OLF.

Papoulis, A. (1964). The meaning of probability. IEEE Transactions on Education, 7, 45–51.

Pasman, H. & Vrijling, J. (2003). Social risk assessment of large technical systems. Human Factors and Ergonomics in Manufacturing, 13, 305–316.

Phenix, C. & Treder, M. (2004). Applying the Precautionary Principle to Nanotechnology. Center for Responsible Nanotechnology. http://www.crnano.org/precautionary.htm.

PSA (2001). Regulations relating to health, environment and safety in the petroleum activities (the framework regulations) 2001. Petroleum Safety Authority Norway (PSA), Norwegian Pollution Control Authority (SFT) and Norwegian Social and Health Directorate (NSHD).

Rackwitz, R. (2004). Optimal and acceptable technological facilities involving risks. Risk Analysis, 24, 675–695.

Rausand, M. & Utne, I. (2009). Risikoanalyse – teori og metoder. Tapir Akademisk Forlag, Trondheim.

Reason, J. (1997). Managing the risks of organizational accidents. Ashgate Publishing Limited, Hampshire.

Renn, O. (2008). Risk Governance: Coping with uncertainty in a complex world. Earthscan, London.

Salter, M. (2008). Imagining numbers: Risk, quantification and aviation security. Security Dialogue, 39, 243–266.

Schäbe, H. (2004). Different approaches for determination of tolerable hazard rates. In ESREL 2004 Conference Proceedings.

Schofield, S. (1998). Offshore QRA and the ALARP principle. Reliability Engineering and System Safety, 61, 31–37.

Sjöberg, B. (2003). Introduction to risk communication. Current trends in risk communication: Theory and practice. Technical report, Directorate for Civil Defence and Emergency Planning, Oslo.

Skjong, R., Vanem, E., & Endresen, Ø. (2007). Risk evaluation criteria. Technical report, SAFEDOR-D-4.5.2, DNV.

Slovic, P. (1987). Perception of risk. Science, 236, 280–285.

SSB (2009). Dødsfall etter kjønn, alder og underliggende dødsårsak. Hele landet. 2007. Statistisk Sentralbyrå. http://www.ssb.no/dodsarsak/.

Starr, C. (1969). Social benefit versus technological risk. Science, 165, 1232–1238.

Starr, C. & Whipple, C. (1980). Risks of risk decisions. Science, 208, 1114–1119.

Stuphorn, J. (2003). Iterative decomposition of a communication-bus system using ontological analysis. PhD thesis, Universität Bielefeld.

Tannert, C., Elvers, H., & Jandrig, B. (2007). The ethics of uncertainty. In the light of possible dangers, research becomes a moral duty. EMBO Reports, 8, 892–896.

Teknisk Ukeblad (2009). Krever nye tall fra havforskerne. Teknisk Ukeblad, 35, 10–11.

Trouwborst, A. (2007). The precautionary principle in general international law: Combating the Babylonian confusion. RECIEL, 16, 185–195.

Trung, L. (2000). The GAME, MEM and ALARP principles of safety (abridged version). Recherche Transports Sécurité, 68, 63–65.

Tversky, A. & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.

United Nations (1992). Report of the United Nations conference on environment and development, Rio Declaration on environment and development. Technical report, United Nations, New York.

US Presidential/Congressional Commission on Risk Assessment and Risk Management (1997). Framework for environmental health risk management. Final report, volume 1. Technical report, US Presidential/Congressional Commission on Risk Assessment and Risk Management. http://www.riskworld.com/Nreports/1997/risk-rpt/pdf/EPAJAN.PDF.

Vatn, J. (1998). A discussion of the acceptable risk problem. Reliability Engineering and System Safety, 61, 11–19.

Vatn, J. (2009). Issues related to localization of an LNG facility. In Reliability, Risk and Safety: Theory and Applications. Proceedings from ESREL 2009.

Vaurio, J. (1990). On the meaning of probability and frequency. Reliability Engineering and System Safety, 28, 121–130.

Vinnem, J. (2007). Offshore risk assessment: Principles, modeling and application of QRA studies. Springer, London.

Vinnem, J., Haugen, S., Vollen, F., & Grefstad, J. (2006). ALARP-prosesser. Utredning for Petroleumstilsynet. Technical report, Petroleumstilsynet.

Vrijling, J., van Gelder, P. H. A. J. M., Goossens, L., Voortman, H., & Pandey, M. (2004). A framework for risk criteria for critical infrastructures: Fundamentals and case studies in the Netherlands. Journal of Risk Research, 7, 569–579.

Vrijling, J., Hengel, W., & Houben, R. (1998). Acceptable risk as a basis for design. Reliability Engineering and System Safety, 59, 141–150.

Walker, T. (2001). Tolerability of risk: Its use in the nuclear regulation in the UK. Technical report, HSE.

Watson, S. (1994). The meaning of probability in probabilistic safety analysis. Reliability Engineering and System Safety, 45, 261–269.

Webster (1978). Webster's Encyclopedic Unabridged Dictionary of the English Language. Random House, New York.

Wilson, K., Leonard, B., Wright, R., Graham, I., Moffet, J., Pluscauskas, M., & Wilson, M. (2006). Application of the precautionary principle by senior policy officials: Results of a Canadian survey. Risk Analysis, 26, 981–988.

Woodruff, J. (2005). Consequence and likelihood in risk estimation: A matter of balance in UK health and safety risk assessment practice. Safety Science, 43, 345–353.

Wu, J., Apostolakis, G. E., & Okrent, D. (1990). Uncertainties in system analysis: Probabilistic versus nonprobabilistic theories. Reliability Engineering and System Safety, 30, 163–181.

Yellmann, T. & Murray, T. (1995). Comment on "The meaning of probability in probabilistic safety analysis". Reliability Engineering and System Safety, 49, 201–205.

Appendix

Abbreviations and acronyms

AIR – Average individual risk
ALARA – As low as reasonably achievable
ALARP – As low as reasonably practicable
CBA – Cost-benefit analysis
DDT – Dichlorodiphenyltrichloroethane, a banned pesticide
FAR – Fatal accident rate
FN-curve – Frequency/number of fatalities curve
GAMAB – Globalement au moins aussi bon
HSE – Health and Safety Executive, UK
IR – Individual risk
IRPA – Individual risk per annum
KPI – Key performance indicator
LIRA – Localized individual risk
MEM – Minimum endogenous mortality
PFD – Probability of failure on demand
PLL – Potential loss of life
PSA – Petroleum Safety Authority, Norway
QUALY – Quality adjusted life years
RAC – Risk acceptance criteria
RPN – Risk priority number
SFAIR – Safe so far as is reasonably practicable
SIL – Safety integrity level
SIS – Safety instrumented system
VPF – Value of preventing a fatality
VSL – Value of a statistical life


The ROSS activities at NTNU are supported by the insurance company TrygVesta. The annual conference "Sikkerhetsdagene" is jointly arranged by TrygVesta and NTNU.


Further information about the reliability, safety, and security activities at NTNU may be found at the web address: http://www.ntnu.no/ross

ISBN: 978-82-7706-228-1