
The Status and Prospects of Risk Assessment

IAN BURTON* and RONALD PUSHCHAK,† Toronto, Canada

*Institute for Environmental Studies and Department of Geography, University of Toronto.
†Department of Geography, Erindale College, University of Toronto.

Abstract: After discussing the changing nature of perceived risk problems, the status of risk assessment is described in relation to its origins and in particular to its roots in the environmental impact statement process. The nature of risk, its component elements and the manner in which existing concepts of risk have been reflected in risk assessment methods are described. The paper considers two emerging schools of thought in current risk assessment studies: one that calls for more accurate measures of risk and increasingly comprehensive event prediction models to determine risk acceptability and another which argues that the acceptance of risk is less dependent on the accuracy of risk analyses than it is on the nature of the decision-making process and in particular on whether compensation is provided for those bearing a disproportionate share of risk.

The Risk Problem

It has always been necessary to make collective and public decisions that involve a degree of risk to society as a whole and an unequal allocation of the risk among different groups. The risk of polyvinyl chloride manufacture is accepted and falls most heavily on those who work in the industry. Many locational decisions, such as the siting of nuclear waste facilities (PUSHCHAK and BURTON, 1983) or a liquid natural gas terminal (LATHROP, 1980), arouse fear and opposition among those who perceive themselves to be at greater risk. The transport of hazardous goods by rail through urban areas (BURTON and POST, 1983; BURTON et al., 1980) has recently generated much concern among those living adjacent to railway lines and indifference among those who receive only benefits (DOOLEY and BURTON, 1983).

Decisions of this kind have been made routinely by public and private sector industrial enterprises, and by governments at all levels, with relatively little public opposition either to the level of risk or its unequal allocation. In many developing and newly industrialized countries this attitude still prevails and political spokesmen from such countries have been heard to express the wish to have more risks together with their presumed benefits.

In the Western industrialized countries, including Japan, the level of imposed risk and its distribution has been subject to growing public attention, and in some cases has sparked vigorous and highly vocal opposition. In Ontario a recent government attempt to site a liquid waste facility on previously publicly owned land in a rural area - the South Cayuga site - was completely reversed and resulted in several years' delay and enormous expense because of the alleged risk to the surrounding population (WELLER and JACKSON, 1982). Incomplete motorways, cancelled contracts for the construction of nuclear power stations and a litany of similar scars of battle in almost every community in the Western world bear ample testimony to an era of confrontation in which the concept of risk almost always plays a prominent role.

There are two explanations for the increased attention paid to risk-related decisions. First, the nature of risk has changed substantially, warranting increased public concern, and second, the public has generally become more involved in environmental quality decisions, with risk-related issues coming to the centre of controversy as public participation increases.

The current interest in risk and in the practice of risk assessment has developed as a result of both influences; the nature of risk-generating technologies has changed and at the same time the public has become increasingly concerned about new technological hazards and the inequitable distribution of risk that seems to be the inevitable result of recent decisions. As KASPERSON and KASPERSON (1983, p. 135) argue, the prevailing assumption that society routinely accepts risk as a cost of obtaining goods and services is now being challenged. The emerging view is that "most risks are likely imposed on imperfectly informed risk bearers who often lack the freedom to accept or reject the risk".

A Definition

By the word 'risk' is meant the combination of the probabilities of risk events (E) and their consequences, summed over all events. Thus a high risk is one in which the average expected losses over a specific period of time, such as 1 year, are judged to be great. This idea can be simply expressed as:

risk = Σ P(E) × consequences.
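The arithmetic of this definition can be illustrated with a short sketch. The event probabilities and consequences below are invented for illustration only and carry no empirical meaning.

```python
# Illustrative only: a hypothetical set of annual risk events for a single
# activity. Each entry is (annual probability of the event, consequence if
# it occurs, expressed here as fatalities). All numbers are invented.
events = [
    (1e-2, 0.1),    # frequent, minor event
    (1e-4, 10.0),   # rare, serious event
    (1e-6, 1000.0), # very rare, catastrophic event
]

# risk = sum over events of P(E) x consequence, i.e. expected losses per year
risk = sum(p * consequence for p, consequence in events)
print(f"Expected losses: {risk:.4f} fatalities per year")
```

In this invented example each event contributes the same expected loss, which shows how the expected-value definition treats a frequent minor event and a rare catastrophe as equivalent; as discussed later in the paper, this is precisely the equivalence that public perception tends to reject.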

There are many kinds of risks; the social risk of being kidnapped or raped, the economic risk of losing one's job or of business collapse, the political risk of losing an election or an armed conflict and the technological risk of a transportation accident or an industrial explosion. We also speak of environmental or ecological risks - toxic contaminants in food and water supplies and the risk of flood or earthquake.

In the risk assessment field we are beginning to think of all these diverse risks as a single set of phenomena, perhaps because of the high public expectation that governments must provide for the security of all citizens in allocating risks for any activity. Common sense tells us that these risks involve incommensurables. Nevertheless, the science and art of risk assessment attempts to bring order and rationality into our thinking about risk, and to create the measures and procedures needed to adequately allocate risks for a truly informed and educated public.


For individuals, probably the most serious consequence of risk is death. The probability of death is, therefore, one measure of risk that cuts across the diverse patterns of experience. Moreover, since actuarial statistics are readily available, much of the data about risk assessment centres on the probability of death from various causes or activities. This is, admittedly, an oversimplification, but it provides a starting point and recognizes death as one of the most important consequences, although for some people, some of the time, there are fates worse than death.

The Need for Risk Assessment

Risk assessment would be necessary if the burden of risk borne by society were increasing. While present perceptions suggest that risks are indeed increasing, the perceptions are not supported by available data. In this sense life is not more risky. Expectation of life has increased dramatically during the century. On average we live longer. Therefore the probability of death at any one time is less and longer life means safer life (MILLER and RIDGEWAY, 1983).

In another important, if less obvious, sense we really do not know whether risks are increasing. The nature of risk in public decisions has changed as technological developments have created risks that are generally less likely to occur but with consequences that are potentially more costly in loss of life or property damage. The scale of risk-generating facilities has increased, adding to the magnitude of potential consequences. In addition, recent decisions have increased risk by tending to 'cluster' risk-generating facilities in one location to take advantage of economies of scale and to avoid public opposition in new locations. Thus while the aggregate statistical probabilities of death have declined as measured by experience, we now face a range of risks which can only be stated as possibilities, not as probabilities. We believe these risks to exist but, never having experienced their consequences, we find the assignment of probabilities somewhat difficult.

The most important thing about risk in contemporary society, therefore, is that its character has changed. We can now identify many risks without having any experience of risk-events. There is, therefore, no readily available method of statistical inference, based on samples of limited observations, which permits estimates of probability to be made.

There are several examples of new risks that have been created and identified in the post-war years about which we lack statistical information. It is possible that a liquefied natural gas (LNG) tanker will be involved in an accident in the harbour of a large city, leak its cargo and cause a catastrophic 'bleve' (boiling liquid/expanding vapour explosion). It is possible that one or more nuclear reactors will experience core melt-down, resulting in the massive release of fission products. It is possible that depletion of the ozone layer or the build-up of carbon dioxide in the atmosphere will result in substantial changes in weather and climate or in levels of radiation. It is possible that one of the newly synthesized chemicals that has been created will be discovered to have latent effects, teratogenic, mutagenic or carcinogenic, when passed on to human populations through ecological food chains.

All these risks might cause considerable numbers of deaths and, thus, even affect the overall expectation of life. In the face of such possibilities it is clear that risks have evolved towards greater uncertainty. Compared with 20 years ago we know more, and there is more that we know is unknown. The growth of risk assessment can be seen as a response at both the intellectual and organizational levels of society to grapple more effectively with this growth of uncertainty.

And so, in the 1970s the concept of risk escaped from the protective custody of statisticians, actuaries, engineers and social psychologists to become one of the major buzz-words of the decade. Risk issues rose to a position of prominence on the agenda of government agencies, private corporations, the media, public interest groups and some members of the academic community. This groundswell of interest had three effects. First, the concept of risk became a rubric under which several risk issues of a transdisciplinary, transcientific nature could be articulated. Second, Western industrial societies realized that they were faced with a new problem of a complex and intractable sort - the need to manage environmental and technological risk under conditions of great uncertainty. Third, it was assumed that by developing a set of formal concepts and techniques collectively called risk assessment, the management of risks could be improved, and be seen to be improved.


In the following paragraphs this paper describes something of the origins, the status and the consequences of the burgeoning growth of the 'risk industry' and risk assessment. To attempt this analysis is to engage in the re-writing of contemporary history, and the pitfalls of such an exercise are well known. Nevertheless, the attempt is worth making. Clear directions in risk assessment philosophy and method have already developed and in charting potential future directions it will be helpful to have some sense, however imperfect, of where we have come from and what our intentions have been.

Origins and Development of Risk Assessment

Risk as a public issue emerged into prominence out of the environmental concern of the late 1960s. That concern was focused on two types of environmental threat: the threat of increasing and pervasive contamination of the biosphere by a growing number of hazardous substances and the threat of catastrophic environmental events. The threat of pervasive contamination resulted from the introduction of a number of substances into the biosphere by the expanding post-war chemical industry and the widespread use of new chemical compounds, especially in agriculture. Initially the evidence of environmental degradation was confined to fish kills and declines in bird populations, particularly birds of prey. However, as SELIKOFF (1983, p. 71) indicates, geographical studies of pathology in the 1960s began to report differing rates of human disease in different locations. This suggested that environmental conditions were contributing to disease rates in some locations. Occupational hazard studies which reported the health effects of industrial contaminants for specific industries provided some confirmation for this suggestion.

At about the same time major advances were made in the measurement and detection techniques for tracing minute quantities of contaminants in food, air, water and the tissue of living organisms. Once alerted to possible dangers, scientists began a hunt for additional disease-related contaminants and toxic substances (CLARKE, 1980). While it was not clear which of the potential risks that had been identified were serious concerns, the public perception of a pervasive environmental risk had been firmly established and methods to measure the severity of those risks and assess their acceptability were put in place, an example being the chemical approvals process in the U.S. Environmental Protection Agency (JOHNSON, 1982).

The second type of environmental threat perceived in the 1960s was that of a catastrophic event (LAGADEC, 1982). Major air inversions in the late 1950s and early 1960s, large oil spills (the Torrey Canyon sinking, the Amoco Cadiz accident and the Santa Barbara blow-out) and the occurrence of Minamata disease in Japan attributed to methyl mercury contamination had demonstrated that catastrophic environmental changes could occur as a result of technological developments or industrial activities. Studies of natural and man-made hazards in the 1960s included estimates of the probability of such extreme events and geographical studies also began to identify the range of human responses to the risk of catastrophic events. One theme in this work began with WHITE's (1942, 1966) investigations of flood hazards in the United States and was extended to include a wide array of natural and man-made environmental hazards (BURTON et al., 1978). The risk of catastrophic change became a part of the academic as well as the public agenda.

The public pressed legislators for means to prevent catastrophic environmental changes. But it was soon recognized that the prevention of such events could only be achieved by an assessment of impacts on the environment before a decision was taken to approve application of advanced technology, site a hazardous facility or make other changes to the environment. Consequently, the U.S. Congress in the National Environmental Policy Act (NEPA, 1969) mandated that environmental impacts of an action be predicted before decisions were taken and that they be documented in an Environmental Impact Statement (EIS). This established for the first time the requirement that all consequences of a risk be predicted.

The EIS requirement was quickly adopted by individual states, by federal and provincial governments in Canada (PUSHCHAK and WILSON, 1981) and in various forms by other nations (BURTON et al., 1983). The EIS became the model for risk assessment in that it assumed the environmental impacts of conventional development decisions were predictable. In dealing with risk-related decisions it was logical to assume that the impacts of an accident in a facility subject to catastrophic failure could be predicted. Consequently, EISs in the 1970s began to include estimates of the probability of a sudden failure and of its consequences.

Geoforumb’olume 15 Number 3i19S-l

In the 1970s risk assessment as a separate activity emerged partly as a result of new risk estimation methods devised by the National Aeronautics and Space Administration to deal with its unique risk estimation problems. The United States space programme was conducting manned space flight in orbiting satellites in the 1960s and preparing for the moon voyage. The risk of failure in the space technology became crucially important and new methods of analysis began to be developed, including fault-tree and event-tree analyses. These techniques were designed by NASA scientists and engineers to detect errors and to model their effects so that designs could be improved and risks reduced.
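The core idea of fault-tree analysis is to combine estimated failure probabilities of individual components through logical AND and OR gates to obtain the probability of an undesired top event. The sketch below is a minimal illustration of that combination rule, assuming independent components and using invented probabilities; it is not drawn from any NASA model or from the Rasmussen study discussed next.

```python
from math import prod

# Minimal fault-tree illustration with invented, independent component
# failure probabilities (not taken from any actual reliability study).

def and_gate(probs):
    # An AND gate fails only if every one of its independent inputs fails.
    return prod(probs)

def or_gate(probs):
    # An OR gate fails if at least one of its independent inputs fails.
    return 1.0 - prod(1.0 - p for p in probs)

# Hypothetical top event: loss of cooling requires the main pump to fail
# AND the backup system to fail; the backup fails if either its valve OR
# its power supply fails.
p_backup = or_gate([1e-3, 5e-4])    # valve failure or power supply failure
p_top = and_gate([1e-4, p_backup])  # main pump failure and backup failure
print(f"Probability of the top event per demand: {p_top:.2e}")
```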

This work gained little public attention until it was applied in the field of nuclear engineering. In a famous study, RASMUSSEN (1975) applied the methods of fault-tree and event-tree analyses to the study of the risk of major accidents in nuclear reactors. The Rasmussen study was extremely influential, especially when it first appeared, because it demonstrated, for the first time, the application of risk analysis methods to modern high technology, and claimed to show that the probability of a major accident was very low indeed - in the order of 10^-6 or, in the case of some accident sequences, as low as 10^-9.

Finally, risk assessment was established as an area of study with the appearance in print of writings about risk by innovative and imaginative scientists. Notable among this early literature was a seminal paper entitled 'Social benefit versus technological risk' (STARR, 1969) and three books, Of Acceptable Risk: Science and the Determination of Safety (LOWRANCE, 1976), An Anatomy of Risk (ROWE, 1977) and Risk Assessment of Environmental Hazard (KATES, 1978). In recent years a growing volume of specialized studies has been published and inevitably a new journal, entitled Risk Analysis, has appeared under the sponsorship of a new scientific society - the Society for Risk Analysis.

Over time, risk assessment has developed as an established field of study and practice. However, there remain significant structural similarities between it and the EIS:

(a) Both are comprehensive in approach. The EIS requires that all impacts (direct and indirect) in all environmental sectors (social, economic and physical) be estimated. Risk assessment is similarly comprehensive in that all possible events following a decision must be predicted and all consequences estimated. Given the large number of possible accident sequences and consequences for complex technological systems, there can be no assurance that all event sequences and consequences have been considered.

(b) Both depend on limited methods of prediction. Prediction methods require a reasonable historical record of performance, but for many recently created substances or processes the existing record is too short to compute reliable probability estimates.

(c) Both lack a 'stopping' rule. In an EIS there is no rational basis for determining which sectors of the environment should be examined or for deciding that secondary impacts will be estimated while tertiary impacts will be ignored. The same is true for risk assessment. It is not clear which accident causes and sequences should be included and which are too trivial to play a role in an accident chain of events. Catastrophic events have in fact been caused by insignificant actions (an example is the Brown's Ferry reactor fire that was attributed to a candle flame).

(d) Both lack a well-defined format. Risk assessments and EISs vary substantially in content from one application to the next. There is no assurance that two risk assessments done for the same project will have the same contents or reach the same findings.

Given these structural problems, risk assessments to date have not produced reliable or replicable findings. BLOKKER's (1981) review of risk analyses for six industrial enterprises indicates that the reliability of estimates of probabilities and consequences of potential accidents has been disappointing. He found that consequences were calculated with an uncertainty of approximately one order of magnitude and that probabilities were calculated with uncertainties ranging between one and two orders of magnitude. He noted similar uncertainties, of one order of magnitude, in the final risk estimate expressed in number of expected fatalities per year. This was not sufficiently accurate to reasonably assess risk. LATHROP (1980) found similar variations in the results of three risk assessments done for a proposed LNG terminal in Oxnard, CA. He found that variations in risk estimates were substantial, with the risk calculated by the local municipality roughly 380 times higher than that arrived at by state or federal risk analyses.
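Blokker's finding can be illustrated by noting how multiplicative uncertainties compound when risk is computed as probability times consequence: if each term is uncertain by an order of magnitude or more, the product is uncertain by the combined factor. The sketch below uses invented central estimates and uncertainty factors purely to show the form of the calculation; it does not reproduce Blokker's or Lathrop's figures.

```python
# Illustrative propagation of multiplicative uncertainty in risk = P x C.
# The central estimates and uncertainty factors below are invented.
p_central, p_factor = 1e-5, 30.0   # probability uncertain by ~1.5 orders of magnitude
c_central, c_factor = 100.0, 10.0  # consequence uncertain by ~1 order of magnitude

risk_low = (p_central / p_factor) * (c_central / c_factor)
risk_high = (p_central * p_factor) * (c_central * c_factor)

# The combined range spans the product of the two uncertainty factors
# (here a factor of 300 either way, i.e. several orders of magnitude overall).
print(f"Risk estimate range: {risk_low:.1e} to {risk_high:.1e} expected fatalities per year")
```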

In this way, risk assessment has developed as a necessary but as yet insufficient means of deciding the acceptability of risks. It was the logical outcome of the environmental concern era, but as SELIKOFF (1983, p. 71) argues:

Risk assessment was incompletely prepared for its new prominent role. These deficiencies are much with us still. They are largely structural and not dependent upon the skill or competence of its practitioners.

Risk Assessment Practice

A number of attempts have been made to define the field and give it some order or structure. A common view describes the field of risk analysis as comprised of three subsets: (1) risk identification; (2) risk measurement or estimation; and (3) risk evaluation, as shown in Figure 1.

[Figure 1. The components of risk analysis: risk identification, risk measurement or estimation, and risk evaluation.]

It is generally agreed that the field is trans-scientific. That is, it extends beyond the boundaries of the natural sciences and engineering to include the social process of judging safety or determining the acceptability of risks. A more detailed structure of the field (Figure 2) lists different ways of measuring and estimating risk, and addresses the issue of social evaluation in terms of social objectives, public policy criteria and the principal means used for managing risk. While there is no consensus as to the scope and content of risk assessment, it is now generally agreed that the social, economic and policy sciences have a role to play in risk assessment, even among those who prefer to use the term 'risk analysis'.

[Figure 2. The dimensions of risk assessment as a field of study. The figure sets out methods for the identification and estimation of risk (experimenting, screening and testing; accident analysis; monitoring; event-tree and fault-tree analysis; modelling; dose-response relationships; occupational and spatial analogs; common sense and folk-knowledge) alongside risk evaluation in terms of policy objectives (risk avoidance, balanced risk, benefit-risk), considerations in public policy (including degree of public concern, public perception, voluntary consent, revealed and expressed preference, cost-effectiveness and court judgements) and management means (including best practicable means, best available means, and criteria and standards).]

Technical controversy

Controversy is still rampant among risk assessors about the practice of their craft. During the early

growth of interest in risk assessment, it was assumed by many that the application of disciplined thought could bring substantial rationality into an area of chaos and darkness. The work of the International Commission on Radiological Protection (ICRP) on exposure to radiation and the development of the concept of harm commitment, coupled with conservative extrapolations of observed dose/response relationships, provided grounds for confidence that methods of analysis could be developed for decision-making which would be reassuring to the public.

This was certainly the mood which prevailed when SCOPE (Scientific Committee on Problems of the Environment) established its mid-term project on risk assessment in 1973. It was then the expectation that a state-of-the-art review of risk assessment would help to clarify methods and approaches to be used and help to promote their more widespread use in developing countries as well as in the developed industrial nations. Also during this period a number of risk assessment studies were carried out which later became the object of much criticism as well as praise. One thinks in this context of the Rasmussen study, already referred to, and, on a smaller scale, the Inhaber study for the Atomic Energy Control Board in Canada (INHABER, 1978).

Unfortunately, but perhaps not really surprisingly, the application of disciplined thought did not lead to the blossoming of rational and reasonable methods in the management of risk that had been expected. Rather, the reverse was the case. Some of the scientific risk assessors were subjected to levels of social criticism that they had scarcely anticipated. Among the criticisms of the Rasmussen study, for example, was the argument that the study was designed more to estimate risk and to demonstrate that it was very low than to make reactors safer. Critics attacked the study both on technical grounds and by impugning its motivation. Many were left with the impression that the Rasmussen study was meant as an exercise in public relations to reassure the public about the safety of reactors rather than to make an objective assessment.

Similarly, in Canada the study of the risk of energy production by various alternative technologies conducted by Herbert Inhaber for the Atomic Energy Control Board came under severe criticism. The report purported to show, inter alia, that nuclear power could be produced more safely than power from windmills. This surprising assertion was based on the assumption that windmills require about 1000 tonnes of material per megawatt-year of output, and it is in the fabrication of this material that deaths are expected. One critic argued that this assumption was incorrect by a factor of 100 (HOLDREN, 1981).

Whatever the merits of the particular argument, a defence of nuclear power on the grounds that it is safer than windmills was not consistent with perceptions of nuclear power risk and did little for the public credibility of risk analysis as a field.

Everyone who has been involved in risk analysis can cite examples of apparent absurdities which are extremely difficult to convey to the public as examples of rationality. The test that led the U.S. Food and Drug Administration to ban cyclamates was conducted on rats that were fed high doses of both cyclamate and saccharin. Yet cyclamates were banned and at the time saccharin was not. There are now two formulae for red dye or colouring matter available, Red 2 and Red 40. Red 2 is banned in the United States but is universally used in Canada. Red 40, on the other hand, is banned in Canada but is widely used in the United States.

It is therefore not surprising that earlier optimism about risk assessment has given way to much more sober judgements about its potential usefulness. Many of the risks which have been identified as possible risks are not of the sort where the application of disciplined thought and scientific reason will substantially alter public perceptions that present environmental risks are high. Risk assessment is a much more difficult and complex process than initially assumed and risk analysts and decision-makers have learned to be much more cautious.

Clearly there are some severe technical difficulties in risk analysis. There are also many impediments in the political process to the adoption of effective risk management strategies that would persist even if all the technical difficulties could be cleared up. That there are evident limits to human reason does not mean that the allocation of scientific manpower and financial resources to risk assessment is wasted. It simply means that the limits should be recognized and that scientific research should be set in a social context which allows for incremental improvement rather than a search for definitive answers.


Scientific vs public perception of risk

No matter how good our science may be, it will not resolve the major difficulty we face; that is, the great discrepancy between scientific and public perception. The definition of risk that has come to be accepted in the scientific community is, as previously noted, that risk equals the probability of an event times its consequence. For the scientist, risk reduces to three components which can be measured or at least estimated in an intendedly rational fashion. The three components are simply a probability estimate, the period of time or interval to which that probability applies, and the consequences as assessed in deaths, injuries or damage. In order to make quantitative estimates of those components it is usually necessary to model the 'risk system' from the initiating event or cause through to final outcome. The techniques and methods of investigation developed are, perhaps, best described as risk analysis.

For the public, however, risk is perceived (understood subjectively) rather than rationally calculated. The distinction is repeatedly made in the literature between objective (calculated) risk and subjective (perceived) risk (KASPER, 1980; LEE, 1980; SLOVIC et al., 1981; WHYTE and BURTON, 1982):

(1) Objective risk is the probability of a future event calculated from statistical data provided by past events. Objective risks can be simple probabilities based on the past frequency of a single event (an oil spill). They can also be complex calculations made up of a number of event frequencies to estimate the probability of an event for which no record exists (a loss-of-coolant accident in a nuclear reactor would be the result of a number of separate contributing events, each with its own probability of occurring).

(2) Perceived risk is an assessment of the probability of an event and its consequences arrived at subjectively by individuals. Perceptions are influenced by personal experience, memory of related events, level of knowledge and feelings about the event being considered. LEE (1981) suggests that because individuals share common experiences, and because they attach considerable credibility to information and assessments provided by others (media, opinion leaders, friends), there is a convergence toward norms of risk perception.

It has been pointed out (SLOVIC et al., 1979; WHYTE and BURTON, 1982) that public perceptions of risk for nuclear power or liquid natural gas (LNG) facilities are much higher than objective risk assessments calculated for the same facilities. Generally, there is a tendency in public perceptions to overestimate the risks of low-frequency events and to underestimate the risks of high-frequency, common events (automobile travel, disease). As a result, the tendency in the decision-making arena has been to view objective measures of risk as reasonably accurate and valid estimations of actual risk while perceived risks have been thought of as irrational and emotional assessments that are subject to substantial error.

However, SLOVIC et al. (1979) argue that success in risk management depends on dealing with subjective judgements and perceptions of risk. To find a site for a hazardous waste facility, for example, a substantial degree of public acceptance must be forthcoming and, unavoidably, the subjective perception of risk is the basis of risk acceptance, regardless of the quantified or objective evaluation (ROWE, 1977).

Recent research (KASPER, 1980; LEE, 1981; FISCHHOFF et al., 1982) has argued that while public perceptions of risk have been substantially higher than objective calculations and subject to systematic error and bias (TVERSKY and KAHNEMAN, 1974; COVELLO and MENKES, 1982), such perceptions are seldom irrational. Rather, people's cognitive skills and access to information in assessing risks are limited, so they resort to commonly held values and rules of thumb to reduce complex problems to simpler forms:

(a) In public perceptions, the death of several related people is thought to be worse than the death of several unrelated people.

(b) The apparently 'rare' accident (Three Mile Island) creates the perception that the probability of more such incidents has increased (even though objective risk calculations suggest the probability does not change with the occurrence of a single event).

(c) The sheer dread of catastrophic accidents accounts for much of the seemingly non-rational contradiction in risk-taking (automobile driving compared with nuclear power). Objective risk assessments suggest that the rate of death (catastrophic or slow) should not affect perception, when clearly the death of several people at once is commonly perceived to be worse than the death of the same number at intervals of time.

Such commonly held values substantially shape public perceptions. The literature also suggests that people assess the risk of an uncertain event by relying on a limited set of heuristic devices to reduce the complex task of assessing probabilities to simple judgements (TVERSKY and KAHNEMAN, 1974):

(a) Representative judgements: lay persons establish the probability of an event by assuming it is representative of another better known event. It might be assumed that nuclear power technology is representative of aerospace technology with similar probabilities of accidents.

(b) Availability judgements: lay persons estimate the probability of an event by referring to available instances that can be easily recalled or imagined. The Love Canal incident may have sensitized people to judge the risks of hazardous waste disposal to be great because the instance is easily recalled (available).

(c) Anchoring and adjustment: perceptions are established initially by an original value or assessment. New information requires adjustments in the original value, but adjustments are never enough. Consequently, perceptions are 'anchored' to the initial judgement. The initial judgement of hazardous waste facilities was formed in the late 1970s with the incident at Love Canal and the subsequent discovery of numerous toxic waste dumps in North America. It is unlikely that new information will adjust perceptions greatly given the established perception of risks.

By using the heuristic devices outlined above, lay people generate perceptions of risk and structure their behaviour accordingly, and from these perceptions arise societal norms. GARDNER et al.'s (1982) survey results for nuclear power facilities indicated that actions taken by the public to influence the decision-making process (lobbying, letters, demonstrations) were systematically correlated with their rated acceptability of risks and the qualitative characteristics of nuclear power. The perceptions of risk (dread, catastrophic potential) clearly guided public actions. The literature therefore suggests that public perceptions of risk are not irrational; rather, they are reasonable when viewed in the context of common human values and rational within the bounds of commonly used heuristic devices.

Risk and cause

Various models have been proposed for the public perception of risk. The public also thinks of risk in terms of probability and consequence; however, for the public the cause of a risk seems to be a major factor. For public risk perception we may write:

risk = consequence^P × cause × probability(E).

For the public the consequences of risk loom very large. Consequences, in the public perception of risk, include the intendedly objective measurements supplied by scientific analysis raised to a power (P) resulting from the fears generated.

Public assessment of risk is not independent of cause. What would otherwise be the same level of risk numerically is differentially evaluated according to the associations with cause. Acts of God are to be accepted as inevitable. Acts of Man, however, invite the attribution of blame and, increasingly, people are disinclined to accept the notion of Acts of God. Somebody is usually thought to be at fault. Probability is a concept well understood and appreciated by the public, but perceived probabilities are likely to vary considerably from objectively measured probabilities (WHYTE, 1983).

Consequences are seen differently according to a range of factors such as whether the risk is voluntary or involuntary, whether children are at risk or not and whether the expected fatalities are likely to be grouped or randomly distributed. A list of ten factors observed to be of significance in the public perception of risk is shown in Figure 3. When a small sample of respondents were asked to mark the characteristics of nuclear power generation and smoking on these ten scales, a pattern emerged which showed that on eight of the ten scales smoking is seen as being on the lower risk perception end of the scale.

[Figure 3. A perception of nuclear power vs smoking. Ten characteristics are scaled from higher to lower risk perception: involuntary/voluntary, dread/common, children at risk/no children, immediate/delayed, high fatalities/low fatalities, bunched fatalities/random fatalities, not understood/understood, not experienced/familiar, uncontrollable/controllable, identifiable victims/statistical victims; nuclear power generation and smoking are each rated on the ten scales.]

This discussion shows that there can be no easy convergence between the scientific/technical assessment of risk and the public or layman's perception of risk. While public decision-making would be much more straightforward without the divergence of view which prevails, it is unrealistic to expect that the problem will go away.

Nevertheless, choices must be made. It is evident that risk is a part of life and that risk cannot be avoided. The goal of reducing risk to zero is unrealistic and misleading, because as one risk is reduced it is inevitable that others will increase. It is possible that cyclamate or saccharin are carcinogenic, but their elimination might well mean increased consumption of sugar, contributing to obesity and heart disease. The elimination of nuclear generating stations would remove the risks that they involve, but would result in increased risks elsewhere from other forms of electricity generation, be it coal mining incidents, air pollution or acid precipitation.
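The divergence between the two definitions can be made concrete in a short sketch. The functions below compute the objective expected loss alongside the perceived-risk expression given above; the dread exponent and the weighting attached to a man-made cause are invented placeholders with no empirical standing.

```python
# Illustrative comparison of the objective and perceived risk expressions.
# The dread exponent and the cause weighting are invented placeholders.

def objective_risk(probability, consequence):
    # risk = P(E) x consequence
    return probability * consequence

def perceived_risk(probability, consequence, dread_exponent, cause_weight):
    # risk = consequence**P x cause x P(E), following the expression in the text
    return (consequence ** dread_exponent) * cause_weight * probability

# Hypothetical rare, involuntary, man-made hazard
p, c = 1e-5, 1000.0
print(objective_risk(p, c))                                        # 0.01
print(perceived_risk(p, c, dread_exponent=1.5, cause_weight=5.0))  # about 1.6
```

Even this crude formulation reproduces the qualitative point made in the text: for the same probability and consequence, an involuntary, dreaded, man-made risk is perceived as far larger than its expected-loss value.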

Acceptable Risks

There are five kinds of arguments that are habitually used to define acceptable risk. These are all comparative in form. They are (i) comparison with natural background or pre-existing levels of risk; (ii) comparison with alternative ways of approaching the same objectives; (iii) comparisons with unrelated risks; (iv) comparisons with benefits; and (v) risk reduction by cost-effectiveness criteria (WHYTE and BURTON, 1980). The first three of these approaches are logically indefensible, and the fourth and fifth are confounded by problems of measurement. A few words of comment on each are in order.

(i) Comparison with background levels

Proponents of certain risks argue that where the risk only amounts to a small increment over existing or background levels it should be acceptable to reasonable people. An example of this may be found in the report entitled The Management of Canada's Nuclear Wastes, prepared for the Ministry of Energy, Mines and Resources (see Figure 4).

The figure indicates that radiation from nuclear power development is minute compared with natural background levels, with the use of radiological methods in the detection and treatment of disease, and with fallout from nuclear weapons testing. This argument as it stands is a rationalization. It does not show why an increase in risk, however small, should be acceptable. The fact that we already face and accept (albeit reluctantly) radiation risks from a number of sources does not mean that we should be content to see that risk increased, even though the increase may be comparatively small. The conclusion is that comparison with natural background or pre-existing levels of risk is a spurious argument.

[Figure 4. Annual genetically significant dose rate (mrem/yr), averaged over the whole population, from radiology, nuclear medicine, nuclear power production (at 1 kW per person), occupational exposures and consumer goods (taken from AIKIN et al., 1977).]

(ii) Comparisons with alternatives

Another approach to acceptability of risk is to argue that the risks associated with the proposed activity are lower than they could be by other means of reaching the same objective. Thus, as Inhaber has argued, the risks of electricity generation in nuclear

stations are less than the equivalent amount of power generation by other means such as oil-fired or coal-fired stations or even by windmills. It is even argued that equivalent energy production by passive solar heating methods could be more risky (people falling off ladders). This approach again begs the question of why we would wish to accept any increased risk at all. Granted that if there is to be more risk it should be as small an increment as possible, but an increment has not been justified.

(iii) Comparisons with unrelated risks

The least acceptable form of risk rationalization is one that seeks to compare totally unrelated risks. The public has repeatedly been told that nuclear power is much less risky than smoking cigarettes, driving a car, rock climbing or canoeing and so forth. The argument here is as before. Since many people are willing to voluntarily accept the risks of smoking or car driving, they ought to be willing to accept the much smaller risks associated with nuclear power. This again begs the question and this time in an extreme form. What has nuclear power generation got to do with an individual's choice to smoke or drive?


(iv) Risks vs benefits

The fourth approach to risk acceptability is benefit-risk. Here it is simply argued that a guide to the acceptability of a given level of risk is the amount of benefit which is to be gained (WILSON and CROUCH, 1982). This is a sound approach in theory. Its difficulty arises in practice. Benefits are notoriously difficult to measure, as has been well demonstrated over four decades or more of cost-benefit analysis. Dollar values can be imputed for direct and tangible benefits, but there are always indirect and intangible benefits, sometimes of great consequence, that cannot be assessed in dollar terms.

More serious from the perspective of social decision-making is that risk-benefit analysis brings into question the issue of the distribution of benefits vs risks. Commonly those who benefit do so in differing degrees and those who are at risk are not necessarily the same individuals.

In spite of such difficulties, however, risk-benefit analysis does represent a logical and defensible way of analyzing the acceptability of risks.

(v) Risk reduction by cost-effectiveness criteria

Another approach substitutes cost-effectiveness for benefit and increased safety as the reverse side of the coin of risk. SIDDALL (1980) simply asks the question “How many dollars do we have to spend in order to save a life in various realms or sectors of activity?” As in the case of risk-benefit analysis, the assumptions made and the numbers produced are open to debate. But Siddall’s question is, itself, extremely instructive. It makes us pause to ask if the marginal dollar spent on nuclear safety, the protection of water quality or medical diagnostic tests has the same value in the added increment of safety that it buys.

Insofar as the data which Siddall has collected are acceptable, then his analysis shows, as one would expect, that large sums of money are spent in some areas to buy small increments of safety, most notably in nuclear power generation, whereas in some other areas, including medical diagnosis, much less money is required to save additional lives. Siddall's analysis is intendedly rational and his data are open for all to see and criticize. Like other forms of rational analysis, the cost-effectiveness approach involves assumptions that do involve value-judgements. For example, Siddall assumes that one life saved is equal to any other, no matter in what manner it is saved, whereas we know that society accords different values to the manner in which lives are lost.
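Siddall's question can be restated as a simple cost-effectiveness calculation: divide the money spent on a safety measure by the number of statistical lives it is estimated to save, and compare across sectors. The figures in the sketch below are purely hypothetical placeholders used to show the form of the comparison; they are not Siddall's data.

```python
# Cost per statistical life saved for several hypothetical safety programmes.
# All figures are invented placeholders, not Siddall's data.
programmes = {
    "reactor safety upgrade":   {"cost": 50_000_000.0, "lives_saved": 0.5},
    "water quality protection": {"cost": 10_000_000.0, "lives_saved": 5.0},
    "medical diagnostic tests": {"cost":  1_000_000.0, "lives_saved": 10.0},
}

for name, programme in programmes.items():
    cost_per_life = programme["cost"] / programme["lives_saved"]
    print(f"{name}: ${cost_per_life:,.0f} per statistical life saved")
```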

Means of Determining Acceptable Risk

Experience shows that so far attempts to reduce risk assessment to a rational analytical procedure have not been successful. Indeed, there are signs that the more the scientific community has pressed for its version of rationality, the more resistance has been generated.

At present, developments in risk assessment can be seen dividing into two schools of thought. One school is based on the assumption that additional refinements in risk measurement will produce more acceptable decisions and will reduce public opposition to risk-generating activities. In this way it is expected that with improvements in the construction of comprehensive predictive models and with the use of more rigorous decision-making methods, such as decision analysis (KEENEY, 1980), the affected public will have their perceptions altered and will be persuaded of the acceptability of a given risk.

Notwithstanding the structural difficulties inherent in risk assessment discussed above, this approach does not address the important issue of the maldistribution of risk. WOLPERT (1979) and KASPERSON and KASPERSON (1983) have pointed out that decisions about risk usually produce a regressive outcome in which those who are not able economically or politically to reject the risk are forced to tolerate it. As Kasperson and Kasperson have argued, the existing distribution of risk does not support the assumption that risks are presently accepted, rather:

“risks presently borne suggest more about the balance of political forces that prevailed at the time of their occurrence than about their acceptability to the risk bearer” (p. 139).

Risk assessment used in this manner is apt to be misused as a means of achieving politically palatable decisions (PEARCE, 1981). Inequities in the distribution of risk will go uncorrected and will probably be exaggerated, leading to greater decision-making conflicts.

A second school of risk assessment extends the logic of benefit-risk analysis in suggesting that those who bear an inequitable burden of risk, either real or perceived, should be compensated. That is, they should receive benefits to offset the risk they accept. This school of thought suggests that formal risk analyses are conducted as inputs to the decision-making process in which individuals negotiate the levels of risk they are willing to accept and the benefits they will receive. In this process a reasonable analysis of risk describing the probabilities of events and their consequences is a sufficient basis on which to begin the negotiation for compensation.

The authors have argued elsewhere (PUSHCHAK and BURTON, 1983) that precedents for compensating risk have been set in Canadian and American practice and that there is evidence to suggest that compensating for risk as part of the risk acceptance process is a cost-effective means of making decisions.

It is important that future changes in risk assessment reflect the second school of thought and demonstrate a more sophisticated understanding of the difficulties and a frank recognition that wide divergencies between scientific and public risk perception will remain. The aims must be to continue to develop the art and science of risk assessment by creating decision methods that reduce inequities in risk distribution and work towards a steady incremental improvement in public understanding and public trust.

In moving toward mechanisms for equitable risk distribution, it is recognized that the application of reasoned analysis to the judgement of public safety will always have limitations. Although this would require a substantial departure from accepted modes of practice, it might be seen as a return to a commonsense basis. Under English Common Law and legal systems based on Common Law, an appeal is frequently made to the concept of 'reasonableness'.

In English usage, reasonableness means 'what is appropriate or suitable to the circumstances'. Court interpretations of the word as it appears in environmental legislation put it in the context of the ordinary, average man. Reasonableness is that which is "in the scale of prophecy or foresight of reasonable man" (HARPER and JAMES, 1956).


The application of the legal concept of reasonableness to risk assessment suggests four general conditions, as outlined by KASPERSON and KASPERSON (1983):

(A) The utility condition: an allocation of risk is just if and only if it maximizes the summed welfare of all members of the morally relevant community.

(B) The ability condition: an allocation of risk is just if and only if it is based on the ability of persons to bear those risks.

(C) The compensation condition: an allocation of risk is just if and only if those assuming the risks allocated are compensated accordingly.

(D) The consent condition: an allocation is just if and only if it has the voluntary consent of those upon whom the risks are imposed.

These conditions have been identified many times before. To a smaller degree, risk management has begun to meet them. If risk assessment is, henceforth, to be able to bear the weight of contemporary skepticism, and if confidence in our democratic institutions is not to be subject to further erosion, then more complete satisfaction of these conditions is in order.

References

AIKIN, A. M., HARRISON, J. M. and HARE, F. K. (1977) The Management of Canada's Nuclear Wastes, Report EP-77-6. Energy, Mines and Resources, Ottawa.

BLOKKER, E. F. (1981) The use of risk analysis in the Netherlands, Angewandte Systemanalyse, Band 2, 168-171.

BURTON, I. and POST, K. (1983) The Transport of Dangerous Commodities by Rail in the Toronto Census Metropolitan Area: A Preliminary Assessment of Risk. Canadian Transport Commission, Ottawa, May 1983.

BURTON, I., KATES, R. W. and WHITE, G. F. (1978) The Environment as Hazard. Oxford University Press, New York.

BURTON, I., VICTOR, P. and WHYTE, A. V. (1980) The Mississauga Evacuation. Final Report to the Ontario Ministry of the Solicitor General, Toronto, Ontario.

BURTON, I., WILSON, J. and MUNN, R. E. (1983) Environmental Impact Assessment: National Approaches and International Needs. University of Toronto, Toronto.

CLARKE, W. C. (1980) Witches, floods and wonder drugs: historical perspectives on risk management, In: Societal Risk Assessment: How Safe is Safe Enough?, pp. 287-313, R. C. Schwing and W. Albers (Eds). Plenum Press, New York.

COVELLO, V. and MENKES, J. (1982) Issues in risk analysis, In: Risk in a Technological Society, pp. 287-301, Hohenemser and J. Kasperson (Eds). American Association for the Advancement of Science, Washington, DC.

DOOLEY, J. and BURTON, I. (1983) Risk assessment for the transport of hazardous materials, In: Risk: Symposium Proceedings on the Assessment and Perception of Risk to Human Health in Canada, pp. 81-90, J. T. Rodgers and D. V. Bates (Eds). Royal Society of Canada, Ottawa.

FISCHHOFF, B. et al. (1982) Lay foibles and expert fables in judgements about risk, Am. Statist., 36, 240-255.

GARDNER, G. T. et al. (1982) Risk and benefit perceptions, acceptability judgements and self-reported actions toward nuclear power, J. Soc. Psychol., 116, 179-197.

HARPER, F. V. and JAMES, F. (1956) The Law of Torts. Little, Brown and Co., Boston, MA.

HOLDREN, J. P. (1981) Science and personalities revisited, Risk Anal., 1, 173-176.

INHABER, H. (1978) Risk of Energy Production, AECB Report No. 1119. Atomic Energy Control Board, Ottawa (March; revision 1, May; revision 2, November).

JOHNSON, E. L. (1982) Risk assessment in an administrative agency, Am. Statist., 36, 232-239.

KASPER, R. (1980) Perceptions of risk and their effects on decision-making, In: Societal Risk Assessment: How Safe is Safe Enough?, pp. 71-84, R. Schwing and W. Albers (Eds). Plenum Press, New York.

KASPERSON, R. E. and KASPERSON, J. X. (1983) Determining the acceptability of risk: ethical and policy issues, In: Risk: Symposium Proceedings on the Assessment and Perception of Risk to Human Health in Canada, pp. 135-156, J. T. Rodgers and D. V. Bates (Eds). Royal Society of Canada, Ottawa.

KATES, R. W. (1978) Risk Assessment of Environmental Hazard, SCOPE 8. Wiley, Chichester.

KEENEY, R. (1980) Siting Energy Facilities. Academic Press, New York.

LAGADEC, P. (1982) Major Technological Risk: an Assessment of Industrial Disasters. Pergamon Press, Oxford.

LATHROP, J. (1980) The Role of Risk Assessment in Facility Siting: an Example from California. IIASA, Laxenburg, Austria, WP-80-150.

LEE, K. N. (1981) A federalist strategy for nuclear waste management, Science, 208, 679-684.

LOWRANCE, W. W. (1976) Of Acceptable Risk: Science and the Determination of Safety. William Kaufmann, Los Altos, CA.

MILLER, D. R. and RIDGEWAY, J. M. (1983) A risk profile for Canadians, In: Risk: Symposium Proceedings on the Assessment and Perception of Risk to Human Health in Canada, pp. 31-42, J. T. Rodgers and D. V. Bates (Eds). Royal Society of Canada, Ottawa.

PEARCE, D. W. (1981) Risk assessment: use and misuse, Proc. roy. Soc. Lond. A, 276, 181-192.

PUSHCHAK, R. and BURTON, I. (1983) Risk and prior compensation in siting low-level nuclear waste facilities: dealing with the NIMBY syndrome, Plan Canada, 23(3), 68-79.

PUSHCHAK, R. and WILSON, J. P. (1981) Environmental impact assessment for urban natural areas, In: Urban Natural Areas: Ecology and Preservation, pp. 73-90, W. A. Andrews and J. L. Cranmer-Byng (Eds). Institute for Environmental Studies, University of Toronto, Toronto.

RASMUSSEN, N. (1975) Reactor Safety Study, WASH-1400. Nuclear Energy Study Group, Washington, DC.

ROWE, W. D. (1977) An Anatomy of Risk. Wiley, New York.

SELIKOFF, I. J. (1983) Multiple factor interaction in environmental disease: potential for risk modification and risk reversal, In: Risk: Symposium Proceedings on the Assessment and Perception of Risk to Human Health in Canada, pp. 71-80, J. T. Rodgers and D. V. Bates (Eds). Royal Society of Canada, Ottawa.

SIDDALL, E. (1980) Risk, Fear and Public Safety. Atomic Energy of Canada Limited, Sheridan Park, Ontario.

SLOVIC, P. et al. (1979) Images of disaster: perception and acceptance of risks from nuclear power, In: Energy Risk Management, pp. 223-248, G. Goodman and W. D. Rowe (Eds). Academic Press, New York.

SLOVIC, P. et al. (1981) Perceived risk: psychological factors and social implications, Proc. roy. Soc. Lond. A, 376, 17-34.

STARR, C. (1969) Social benefit versus technological risk, Science, 165, 1232-1238.

TVERSKY, A. and KAHNEMAN, D. (1974) Judgement under uncertainty: heuristics and biases, Science, 185, 1124-1131.

WELLER, P. and JACKSON, J. (1982) South Cayuga I: lessons in the need for public participation, Alternatives, 2-3, 5-8.

WHITE, G. F. (1942) Human Adjustment to Floods: a Geographical Approach to the Flood Problem in the United States, Research Paper 29. Department of Geography, University of Chicago, Chicago.

WHITE, G. F. (1966) Formation and role of public attitudes, In: Environmental Quality in a Growing Economy, pp. 105-127, H. Jarrett (Ed.). Johns Hopkins, Baltimore, MD.

WHYTE, A. V. and BURTON, I. (Eds) (1980) Environmental Risk Assessment, SCOPE 15. Wiley, Chichester.

WHYTE, A. V. (1983) Probabilities, consequences, and values in the perception of risk, In: Risk: Symposium Proceedings on the Assessment and Perception of Risk to Human Health in Canada, pp. 121-134, J. T. Rodgers and D. V. Bates (Eds). Royal Society of Canada, Ottawa.

WHYTE, A. and BURTON, I. (1982) Perception of risks in Canada, In: Living with Risk: Environmental Risk Management in Canada, pp. 39-69, I. Burton, C. D. Fowle and R. A. McCullough (Eds). Institute for Environmental Studies, University of Toronto, Toronto.

WILSON, R. and CROUCH, E. (1982) Risk/Benefit Analysis. Ballinger, Cambridge, MA.

WOLPERT, J. (1979) Regressive siting of public facilities, Nat. Res. J., 16, 103-115.