In Defence of Bad Science and Irrational Policies: an Alternative Account of the Precautionary Principle

Stephen John

Accepted: 17 March 2009 / Published online: 28 April 2009
© Springer Science + Business Media B.V. 2009

Abstract In the first part of the paper, three objections to the precautionary principle are outlined: the principle requires some account of how to balance risks of significant harms; the principle focuses on action and ignores the costs of inaction; and the principle threatens epistemic anarchy. I argue that these objections may overlook two distinctive features of precautionary thought: a suspicion of the value of “full scientific certainty”; and a desire to distinguish environmental doings from allowings. In Section 2, I argue that any simple distinction between environmental doings and allowings is untenable. However, I argue that the appeal of such a distinction can be captured within a relational account of environmental equity. In Section 3 I show how the proposed account of environmental justice can generate a justification for distinctively “precautionary” policy-making.

Keywords Precautionary principle . Environmental ethics . Relational conceptions of justice . Risk . Equity

At national and at global levels, environmental law and policy is increasingly framed in terms of the “precautionary principle” (O’Riordan and Cameron 1994; Trouwborst 2002). However, many argue that the principle reflects an unwarranted mistrust of science, is too unspecific to guide policy, and that more specific versions are as likely to increase as to decrease environmental risks.1 In Section 1, I will show how these charges overlook two concerns which might motivate the precautionary principle: a worry about “scientific purity”; and a concern to distinguish environmental doings from allowings. In Section 2, I will argue that appeals to an environmental doing/allowing distinction are in fact flawed, but provide materials for a relational view of environmental justice. Finally, in Section 3 I will show how this view justifies a “precautionary” approach to policy-making.

Ethic Theory Moral Prac (2010) 13:3–18
DOI 10.1007/s10677-009-9169-3

1 Most notably by Cass Sunstein (2002, 2005). See also Manson 2002. For further discussion, see Sandin et al. 2002.

S. John (*)
Hughes Hall, University of Cambridge, Mortimer Road, Cambridge CB1 2EW, UK
e-mail: [email protected]

1 Charges Against the Precautionary Principle

Debate over the precautionary principle centres around two formulations; the UN Rio Declaration:

where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation (United Nations General Assembly 2002);

and the Wingspread Declaration:

when an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically (Wingspread Statement 1998).

Although these statements differ, they (and others) share a common core: we should seek to prevent some threats of damage, particularly threats of serious environmental damage, even when we lack scientific certainty about their existence or magnitude (Raffensberger and Tickner 1999). The claim that environmental policy-makers should take precautions may seem unproblematic. However, the precautionary principle is controversial.

The principle is usually understood as an alternative to Cost-Benefit-Analysis (CBA).2

Proponents of CBA typically argue that policies should be assessed in terms of efficiency. To do this, we calculate how much each affected individual is expected to benefit or lose by adoption of various policies. The greater the difference between sum total expected benefits and sum total expected costs, the more efficient the policy. Of course, even if efficiency is a valid social goal, CBA is not unproblematic. First, as CBA’s proponents often concede, the construction of a scale for comparing all outcomes of action is controversial (Lenman 2000; Anderson 1993). Second, many defenders of CBA claim that efficiency is not the only consideration relevant to policy, but should be just one input (Schmidtz 2001; Sunstein 2002, Chap.5; Hubin 1994, p.10). Despite these complexities, CBA is appealing, because it generates determinate policy proposals via a clear procedure.
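The efficiency calculation just described can be sketched computationally. The following is a hypothetical illustration only: the policy names and figures are invented, not drawn from the paper or from any actual regulatory exercise.

```python
# Hypothetical sketch of the CBA efficiency calculation described above.
# All names and numbers are invented for illustration.

def cba_efficiency(expected_benefits, expected_costs):
    """Net expected benefit: sum of each affected individual's expected
    benefit minus the sum of each affected individual's expected cost."""
    return sum(expected_benefits) - sum(expected_costs)

# Per-individual expected benefits and costs under two imagined policies.
policies = {
    "strict_regulation": ([10.0, 5.0, 2.0], [4.0, 3.0]),
    "light_regulation":  ([8.0, 8.0],       [1.0, 2.0]),
}

scores = {name: cba_efficiency(b, c) for name, (b, c) in policies.items()}
most_efficient = max(scores, key=scores.get)
# "light_regulation" (net 13.0) beats "strict_regulation" (net 10.0),
# so CBA recommends it.
```

Note how the procedure is fully determinate once the per-individual figures are fixed; the controversy lies in constructing the common scale on which those figures are expressed.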

The precautionary principle, by contrast, is indeterminate: it is unclear which threats demand which precautions at which level of certainty short of “full scientific certainty”. Proponents of CBA claim they can resolve these ambiguities: precautions are appropriate when the expected benefits of precaution outweigh the expected costs (Sunstein 2002, p.104). As an alternative to CBA, the precautionary principle faces three, inter-related charges.3

First, opponents claim that, even if giving absolute priority to avoiding risks of certain sorts of outcomes (such as serious environmental damage) in policy-making is justifiable, the principle demands contradictory courses of action (Sunstein 2002, p.104). For example, imagine that planting GM crops risks environmental degradation, while not planting GM crops risks starvation in the developing world. Given that these are both serious harms, the precautionary principle would seem to tell us both to plant and not to plant GM crops.

This problem is made worse, opponents claim, by the fact that the precautionary principle demands action even when we lack “full scientific certainty” of a threat’s existence or magnitude. Imagine we decide that if GM crops pose a risk of bio-diversity depletion, then they should be banned (regardless of food security issues). Using standard scientific methods, we have not established such a risk. The precautionary principle seems to imply that, if there remains any doubt as to their environmental impact, we ought not to plant GM crops. However, any agricultural policy might pose some non-established risk of environmental damage, and therefore might be suspect on precautionary grounds. The precautionary principle’s weak epistemic standards are therefore alleged to threaten paralysis, since any course-of-action might be suspect (O’Neill 2002, Chap.1).

2 For a clear statement of philosophical issues in CBA see Copp 1985.

3 See Sandin 2007 for discussion of how these charges are related.

Third, proponents of the precautionary principle are accused of overlooking this problem, because they focus on the possible costs of action, rather than of inaction (for example, the risks of GM crops, rather than the risks of continuing the agricultural status quo) (Wildavsky 1997). The proponent of the principle is thus faced with a dilemma: either the principle applies both to action and inaction (in which case, it leads to paralysis), or it applies only to action, in which case it might lead to greater environmental problems than it prevents.

The charges are extremely serious. One response is to accept their cogency and to reformulate the principle accordingly. For example, Stephen Gardiner has interpreted the precautionary principle as a Rawlsian maxi-min rule for decision-making under uncertainty (Gardiner 2006). For Gardiner, the principle is not an alternative to CBA, but is to be applied when CBA is inappropriate. An alternative response to the charges is to show that they are not as clear-cut as they first appear. One way to do this is to show that the principle might be motivated by concerns which its opponents overlook. In the rest of this section, I shall identify two such concerns: the first related to “full scientific certainty”, and the second related to the doing/allowing distinction.
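The maxi-min reading mentioned above can be illustrated with a toy decision problem. This is a sketch of the decision rule only, with invented payoffs; it is not drawn from Gardiner's own examples.

```python
# Toy illustration of a Rawlsian maxi-min rule, as in Gardiner's reading
# of the principle. Payoffs are invented. Under uncertainty we lack
# reliable probabilities, so we compare worst cases rather than
# expected values.

options = {
    "plant_gm_crops": [50, 20, -100],  # high upside, catastrophic worst case
    "ban_gm_crops":   [10, 5, -20],    # modest upside, bounded worst case
}

def maximin_choice(options):
    """Choose the option whose worst possible outcome is least bad."""
    return max(options, key=lambda name: min(options[name]))

choice = maximin_choice(options)
# Maxi-min selects "ban_gm_crops": its worst case (-20) beats -100,
# even though "plant_gm_crops" has the higher best case.
```

The contrast with CBA is visible in the code: the rule never aggregates across outcomes, so no probability weights are needed, which is why Gardiner reserves it for cases where CBA is inappropriate.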

When testing whether there is a relationship between two classes of events (including a probabilistic relationship), scientists typically adopt statistical procedures which minimise “false positives”. This ensures they assert that there is a (determinate or probabilistic) relationship between two classes of events only when they have extremely good reason to believe so. However, such procedures increase the chance of “false negatives”, that is, failing to assert that there is a relationship when there is one. Therefore, insisting that only claims established with “full scientific certainty” be used in policy-making might lead to adoption of unacceptably risky policies.
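The trade-off between the two error types can be made concrete with a small simulation. This is a hypothetical sketch: the thresholds, sample size, and effect size are invented, and the crude z-style test stands in for the "standard scientific methods" the paper mentions without endorsing any particular one.

```python
# Simulation of the false-positive / false-negative trade-off described
# above. All numbers are invented. A stricter evidential threshold yields
# fewer false positives but more false negatives for a fixed real effect.

import random

random.seed(0)

def detects_relationship(true_effect, n, threshold):
    """Declare a relationship only if the sample mean lies more than
    `threshold` standard errors above zero (a crude z-style test)."""
    sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
    mean = sum(sample) / n
    standard_error = 1.0 / n ** 0.5
    return mean / standard_error > threshold

def error_rates(threshold, trials=2000, n=30, effect=0.3):
    # False positive: asserting a relationship when the true effect is zero.
    false_pos = sum(detects_relationship(0.0, n, threshold)
                    for _ in range(trials)) / trials
    # False negative: failing to assert a relationship that exists.
    false_neg = sum(not detects_relationship(effect, n, threshold)
                    for _ in range(trials)) / trials
    return false_pos, false_neg

fp_strict, fn_strict = error_rates(threshold=2.58)  # "full certainty" standard
fp_weak, fn_weak = error_rates(threshold=1.28)      # weaker regulatory standard
# The strict standard almost never cries wolf, but it misses the real
# effect far more often than the weaker standard does.
```

This is the asymmetry the text describes: a threshold chosen to minimise false positives necessarily inflates false negatives, which is precisely the cost that "regulatory science" may be unwilling to bear.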

Versions of this concern have been developed by several philosophers, and have been claimed to motivate some formulations of the precautionary principle.4 We might, then, deflect the criticism that the principle leads to an incapacitating epistemic free-for-all by arguing that the principle alerts us to the need to distinguish the epistemic standards proper to “normal science” (where “full scientific certainty” is the appropriate standard for claiming threats exist) from those proper to “regulatory science” (where weaker standards are appropriate).5 Understanding the “lack of full scientific certainty” clause by appeal to these considerations is particularly powerful as a response to proponents of CBA. The claim that we ought to vary testing methodologies with the costs of false-positives and false-negatives has a long history in rational choice theory (Hacking 1975, 1990). CBA is closely linked to consequentialist moral theory, and in turn to rational choice (O’Neill 1993, Chap.5). Therefore, the defender of the precautionary principle seems, if anything, truer to the foundations of CBA than does the proponent of CBA.

4 See Hansson (1998, 2006, 2007); Shrader-Frechette 1995; and Cranor 1993 for discussion. Although the claim that “scientific purity” may be problematic is common, there is disagreement over how such worries relate to the precautionary principle.

5 See Jasanoff 1990 for further discussion of the development of “regulatory science”.

Consider now the criticism that the precautionary principle focuses on the risks of action, and ignores the (possible) costs of inaction. Opponents claim that this reflects a more general “failure” of human cognition, such as “loss aversion”, or an implausible view of nature as benign and fragile (Sunstein 2002, p.42; Sunstein 2003, p.1009). However, recent work by Marion Hourdequin disputes these charges (Hourdequin 2007).6

Hourdequin claims that we often believe that it is worse for humans to destroy environmental goods than for those goods to be destroyed by natural processes; for example, it is worse for a species to become extinct because of human intervention than because of natural pressures. She suggests that this doing/allowing distinction explains (and perhaps justifies) the apparent myopia of the precautionary principle: we are interested in minimising the risks we impose on the environment, even if minimising our impact does not minimise overall risk.7

The possibility that the precautionary principle is motivated by either or both of the concerns outlined above shows that some attacks on the principle are not as clear-cut as they first appear.8 However, I have not shown that these concerns should be reflected in environmental policy-making. Furthermore, combining these two concerns may be problematic, as the “epistemic purity” concern seems to accuse CBA of not being consequentialist enough, whereas the “doing/allowing” concern seems to accuse CBA of overlooking non-consequentialist considerations. Therefore, to assess the precautionary principle we need to know whether these concerns are valid and how they relate.

2 Institutional Attitudes and the Doing/Allowing Distinction

The suggestion that CBA is inappropriate within environmental policy-making because it ignores the doing/allowing distinction has received little attention. In this section, I shall suggest a good reason for this: establishing the cogency and significance of that distinction within environmental contexts is extremely difficult. So much the worse for the precautionary principle, it may seem. However, this conclusion would be too quick. Rather, appeal to the doing/allowing distinction can be understood as an attempt to capture a more plausible concern, that we should adopt a relational conception of environmental justice. In this section, I shall outline such a conception, before in Section 3 showing how it relates to the precautionary principle, and, in particular, to the “full scientific certainty” concern.

The distinction between environmental doings and allowings faces two difficulties. First, why think the doing/allowing distinction is morally relevant in the context of human interactions with the natural environment? Hourdequin’s key argument is that “naturally incurred” risks cannot be described as fair or unfair, but merely as instances of misfortune; humanly caused risks, by contrast, can be described as inequitable. Therefore, if we are concerned with equity, we should not treat action and inaction symmetrically. However, Hourdequin’s argument is problematic. Given that we normally think of equity in terms of agents’ treatment of other agents, it is unclear what is involved in treating the natural environment itself in an inequitable manner.9

6 For further discussion along similar lines, see Hughes 2006 and John 2007. Strangely, Sunstein suggests that the precautionary principle might incorporate an acts/omissions distinction, but simply assumes that this distinction is irrelevant to policy (Sunstein 2007).

7 Note that appeal to a doing/allowing distinction is not the same as status quo bias. For example, for Hourdequin, if (on-going) global warming is anthropogenic, our reasons to prevent warming are stronger than if it is not.

8 Such worries suggest difficulties for Hubin’s claim that the use of CBA can be accepted independently of our over-arching ethical theory (Hubin 1994).

Distinguishing environmental doings from allowings faces a second problem. Mankind’s interactions with the natural environment are so complex that it will often be extremely difficult to establish direct causal links between human action and environmental degradation.10 Many apparent “allowings” are likely to involve human agency, and human “doings” often involve natural factors. These worries are intensified as technology grows more powerful, with greater possibilities of unanticipated side-effects. Hourdequin herself notes this problem, suggesting that the complex causal structure of environmental change becomes “increasingly difficult for our traditional moral concepts—of agency, responsibility and the distinction between doing and allowing—to handle” (Hourdequin 2007, p.358).11

There are, then, serious difficulties with applying and justifying a doing/allowing distinction in the environmental context. However, I will now argue that Hourdequin’s claim in favour of this distinction, that there is a difference between “misfortune” and “inequitable” treatment, can ground a non-consequentialist approach to environmental policy. To do so, I will consider recent work by Thomas Pogge on health equity (Pogge 2004). Pogge claims that contemporary discussions of health equity typically identify just institutions as those which promote a particular distribution of health outcomes. He contrasts such “recipient-oriented” conceptions of justice with the Rawlsian “semi-consequentialist” conception. According to the semi-consequentialist, “the purpose of a social order is not to promote a good overall distribution of goods and ills… but to do justice to, or to treat justly all those whose shared life is regulated by this order” (Pogge 2004, p.154). Pogge claims that Rawls’s general conception of justice is attractive, but, as the case of ill-health illustrates, too crude. First, the interpenetration between the natural and the social makes problematic Rawls’s neat division of benefits and burdens into “natural” (pre-institutional) and “social” (institutionally-created). Second, Rawlsian semi-consequentialism overlooks morally salient differences between ways in which social institutions can relate to the distribution of benefits and burdens.

To motivate these claims, Pogge lists six different ways in which a particular nutritional deficit might relate to social institutions: the deficit might be officially mandated by those institutions; it might be “legally authorised”; the institutions might “foreseeably and avoidably engender” the deficit; it might arise from legally-prohibited but barely-deterred interpersonal behaviour; the institutions might avoidably leave unmitigated the effects of a natural defect; the institutions might avoidably leave unmitigated the effects of a self-caused defect. It seems that even if the health outcome is the same in each of these cases, the nature of institutional involvement is relevant to claims about equity. For Pogge, our account of health equity should recognise this and “weigh the impact which institutions have on quality of life according to how they have this impact” (Pogge 2004, p.156).

For Pogge, the distinctions between different ways in which institutions might be implicated in outcomes should not be interpreted solely in causal terms (e.g. on a scale from most to least causal involvement). Rather, “what matters is not merely the causal role… but also what one might call the implicit attitude of the social institutions in question” (Pogge 2004, p.158). To understand Pogge’s claims, imagine two sets of social institutions, the first of which legally discriminates against a racial group, while the second avoidably fails to help those who recklessly engage in unhealthy behaviour. Even if health outcomes in both societies are the same, and the relevant social institutions are equally causally implicated in both outcomes, the first society seems worse than the second. In the first society, the laws express attitudes which are clearly inequitable, whereas in the second society they do not. Of course, the attitude expressed by the second set of institutions is not unproblematic. However, the first attitude seems more problematic than the second, and it seems that this concern should be reflected in our theory of health equity.

9 Hourdequin suggests two other reasons to adopt a doing/allowing distinction, and thus to deny CBA: first, CBA overlooks concerns about unequal distributions of risks; second, the structure of moral agency requires us to distinguish doings from allowings. However, the first of these complaints could be incorporated within CBA. The second claim only shows that we need to distinguish doings from allowings in some, not in every, context.

10 For further discussion of the doing/allowing distinction in environmental ethics see Thompson 2006.

11 See Cranor 2007, p.38 for related concerns.

Although Pogge’s theory requires refinement, its relevance to my concerns is clear. Hourdequin suggests that there is a morally salient difference between risk imposition and mere misfortune. Pogge’s work builds on a similar intuition. However, Hourdequin understands the misfortune/inequitable treatment distinction in terms of a binary doing/allowing distinction, related, in turn, to a natural/social distinction. Given the inter-relationships between the social (what is done) and the natural (what is allowed), her conclusion is unsustainable. By contrast, Pogge’s work suggests that even if the natural and the social interpenetrate, we can still distinguish between different attitudes (implicitly) expressed by social institutions implicated in the complex causal processes which lead to outcomes. In turn, these institutional attitudes should be a focus of normative political criticism and action.

To develop something like Pogge’s theory in the environmental context, we need an account of equity and we need to identify both the agents of justice (those who express inequitable attitudes in their treatment of others) and the patients of justice (those who suffer inequitable treatment). In the rest of this section I will identify the agents and patients of environmental justice, and return to equity in Section 3.

Pogge identifies agents of justice with schemes of social co-operation (what Rawls called the “basic structure”). However, this is problematic both generally—it is unclear how shared systems of co-operation can express attitudes—and in the specific context of environmental policy, where the question is how to think about particular kinds of decisions, rather than social structures generally. I suggest that, in the environmental context, we should identify as primary agents of justice those governmental and transnational policy-making bodies charged with regulating various activities in the name of environmental protection. One reason to focus on such agencies is that, because they typically possess (quasi-)coercive powers, it is particularly important that they act equitably. A second reason relates to the aims of this paper: it is such agencies which are usually enjoined to adopt the precautionary principle. Of course, there is a puzzle in understanding the claim that corporate bodies, such as governmental agencies, express attitudes. However, we often speak of the attitudes of corporate agents, and such talk seems less problematic than talk of the attitudes expressed by systems of co-operation.12

What, though, count as patients of justice in the environmental setting? Only agents can be the subjects of equitable or inequitable treatment. Although I have assumed that, at least for some purposes, governmental agencies might be viewed as intentional agents, it seems implausible to view eco-systems as the kinds of agents who might be treated inequitably.

12 On corporate agency in general, see French 1984. For this concept in the context of CBA, see Copp (1985, esp. 138–145).


Therefore, to incorporate equity concerns in environmental ethics, we need an anthropocentric account of environmental value. This might seem anathema to the concerns of environmentalists. However, the resources available to a sophisticated anthropocentrism should not be under-estimated.

Standard anthropocentrism tends to assume that environmental value can be understood as a conjunction of the effects of environmental change on human agents (such as ill-health) and the value which humans place on the environment (as reflected by their willingness-to-pay to prevent degradation).13 Such views over-simplify by treating the environment and human agents as related, but distinct. Rather, an anthropocentric account of environmental value should stress that, because human agents are human animals, their capacities for agency are shaped and constrained by natural environments. Human capacities for agency are damaged directly when humans fall ill or suffer other direct harms from environmental damage. However, environmental damage might also undercut agency in other ways: living in an unstable environment can undermine our capacity to plan for the future, or leave us vulnerable to exploitation; insuring against catastrophe might leave us with fewer resources with which to pursue our goals; agency relies on practical identity, which is shaped by our sense of how the natural environment has been and ought to be.14 To develop these claims would take much space. However, my point is simply that even if an equity-based approach cannot capture all of the traditional concerns of environmentalists, it may capture more than is normally recognised.

Having identified regulatory agencies as agents of environmental justice, and human individuals as the patients harmed by environmental degradation, I shall now sketch how a relational conception of environmental justice relates to the doing/allowing distinction. Typically, regulatory agencies must decide whether some course-of-action to be pursued by other agents, such as business corporations or private individuals, should be allowed to go ahead, should be stopped, or should go ahead in some limited way or with safeguards. The relevant courses-of-action might be novel—for example, the cultivation of GM crops—or on-going—for example, those allowed by current fishing regimes. When agencies face such a decision they must take into account the (potential) costs of permitting those courses-of-action. However, it also seems that, as governmental agencies committed to serving the entire population, they should also consider the opportunity costs of not permitting, or limiting, those courses-of-action. As opposed to Hourdequin’s approach, a relational approach to environmental justice does not, then, in-and-of itself, imply that environmental agencies should pay more attention to what they “do” (in an extended sense, to mean what foreseeably occurs as a result of allowing courses-of-action to go ahead) than to what they “allow” (to mean what foreseeably happens when courses-of-action do not go ahead).

However, when agencies decide whether or how to regulate, there are multiple ways in which they might take the potential consequences of courses-of-action into account. These different ways of taking consequences into account can be seen as expressing attitudes towards potential patients of justice which, in turn, can be said to be more or less equitable. I suggest that it is with regard to these attitudes that we should frame an account of environmental justice. If so, there is no reason to think, as opponents of the precautionary principle often seem to, that a decision-making procedure must be problematic if it leads to sub-optimal consequences. Even if there is no simple environmental doing/allowing distinction, Hourdequin is right that equity considerations undermine apparently “obvious” charges against the precautionary principle.

13 For a useful outline of such approaches, see Dasgupta 2001.

14 For versions of each of these claims, see O’Neill 1996; Dercon 2004; Korsgaard 1996.


3 Reconstructing the Precautionary Principle

So far, I have provided a framework for understanding environmental policy-making, rather than a justification of the precautionary principle. It is possible that consideration of the demands of equity might not actually justify the precautionary principle, but instead show that techniques such as CBA reflect equitable attitudes. In this section, then, I shall argue that if we assume equity to demand that the principles underlying decision-making are justifiable to each affected person, then the relational conception of environmental justice justifies distinctively “precautionary” policies.15

As I understand it, the precautionary principle claims that when some proposed or on-going course-of-action poses a threat of serious or irreversible environmental damage, regulatory agencies should not allow that course-of-action to go ahead without safeguards (at the extreme, they should ban it), even if they lack scientific certainty about the existence or magnitude of the threat. The precautionary principle seems parasitic on other decision-making principles, because it does not give guidance when courses-of-action do not pose a threat of serious or irreversible environmental degradation. The complaint of its critics, however, is that by demanding action even in the absence of full scientific certainty, the principle threatens to swamp all other decision-procedures, and to lead to an endless cycle of precaution. Furthermore, they complain that, despite the appeal to “cost-effectiveness” in the Rio declaration, proponents of precautionary policies focus attention on regulating or preventing (suspected to be) risky courses-of-action, with disregard for (opportunity) costs.

One element of debate over the precautionary principle concerns the concept of “full scientific certainty”. However, to understand the relationship between ethical and epistemic concerns, it is useful to ask how we might justify precautionary policy-making in circumstances of “epistemic transparency”, where if there is a risk, then we believe that there is a risk, and if there is no risk, then we do not believe there is a risk. I shall return to decision-making in epistemically murky situations, where we have reason to believe that our beliefs about risks are not “complete”, in Section 3.2 below.16

3.1 Precaution in Epistemically Transparent Worlds

I shall distinguish three kinds of cases in which a regulatory agency might make decisionsabout some proposed or continuing course-of-action under epistemic transparency: first,cases where the course-of-action would definitely lead to, or is known to pose a risk of,mild or reversible environmental damage affecting some in the population but benefitingothers; second, cases where the course-of-action would definitely lead to serious andirreversible environmental damage affecting some but benefiting others; third, cases wherethe course-of-action is known to pose a risk of serious or irreversible environmental damageaffecting some but benefiting others. In these cases, I assume that a regulatory agency must

15 This account of the demands of equity derives, of course, from Scanlon’s work, and is related to strands incontemporary Kantianism (see Scanlon 1998; O’Neill 1996). The precise relationship between my argumentsand such theories is, however, beyond the scope of this paper.16 The distinction between “epistemic transparency” and “epistemic murkiness” does not imply any viewabout the existence of objective risks. Rather, even if all risk claims should be understood in epistemic terms,most theories allow for a “gap” between what we do believe and what we ought to believe. My claims couldbe rephrased as distinguishing between circumstances where our beliefs about the risks of action are as theyought to be and cases where we must establish what the correct beliefs are. For a guide to these issues seeMellor 2005.


choose between allowing the course-of-action, banning it, or allowing a modified version of it to go ahead.

Agencies which allow courses-of-action which will cause (or which will risk causing) mild damage affecting some, but benefiting others, need not be seen as expressing inequitable attitudes. Rather, an agency which was concerned with considerations of equity might reasonably base its decisions as to whether or not to regulate by asking whether a system of regulation which, in general, allowed for (risks of) minor forms of environmental damage was one which, overall, was beneficial to each. A system based around a principle of banning all activities which cause or risk environmental damage would be one which, in the long term, each would have reason to reject. Therefore, equity considerations might allow for courses-of-action which cause (or risk) limited or reversible damage by appeal to something like Hansson’s principle that “exposure of a person to risk is acceptable iff this exposure is part of an equitable social system of risk taking that works to her advantage” (Hansson 2003, p.305).17 The contours of such a system would, of course, require further consideration. However, I shall now move on to the more difficult cases.

In the second case, some course-of-action is known to cause serious and irreversible environmental damage affecting some in the population, but benefiting others. I shall first discuss such cases on the assumption that the relevant benefits for affected individuals are less weighty than the relevant harms for affected individuals (for example, the benefits of slightly reduced food costs versus the burdens of loss of livelihood). However, I do not assume that the sum-total harms are necessarily greater than the sum-total benefits, at least as judged by CBA. In such cases, I suggest that it is difficult to see how an agency might justify allowing the relevant course-of-action to those individuals who will suffer from environmental degradation. To do so would, in effect, be to base decisions on a principle which allows individuals to suffer agency-undercutting harms for the sake of aggregate minor benefits to many others. According to most contemporary accounts of equity, this form of aggregation is inequitable (see, for example, Scanlon 1998, Chap. 5). Therefore, allowing courses-of-action which are known to cause serious harms to some to go ahead is, in some cases, to fail to express equitable attitudes.

We might think such damage could be “compensated”. If so, then it might follow that such courses-of-action are permissible if appropriate compensation mechanisms are in place. Unfortunately, however, the complexities of the dependence of human agency on the environment may make the harms of living in severely-degraded environments uncompensatable. Furthermore, even if we did think that the imposition of harm via environmental degradation on some might be justifiable if compensated, CBA would be of little use in deciding which policies are acceptable. All that CBA tells us is that a policy is potential-Pareto optimal (i.e. the sum-total benefits are such that each could be moved to a situation preferable to her starting-position), not that it is actually Pareto optimal (i.e. that each is actually made better-off by the policy). Extremely efficient policies may not involve actual compensation (Copp 1987). I suggest, then, that equity considerations make the use of CBA in cases where environmental damage is serious and irreversible extremely problematic. Conversely, they seem to support a principle of not allowing courses-of-action which would wreak such devastation, regardless of the expected overall balance of costs and benefits.

This line-of-reasoning may, however, face difficulties as a defence of precautionary policy-making. In some cases refusing to impose environmental damage on some might be to forego saving (equal or greater numbers of) others from equally serious harm. For

17 See Lenman 2008 for a related suggestion.


example, refusing to allow the planting of a crop which will damage parts of the environment might be to forego helping many starving people. Surely, the response might run, at least when the relevant harms and benefits are of roughly equal moral weight, regulatory agencies should consider the claims of those who would be benefited by allowing a course-of-action as well as those who would be harmed by allowing it (a claim I endorsed in Section 2). If so, perhaps equity demands that we ought to decide whether to allow courses-of-action by considering the effects of both allowing and not allowing in terms of some kind of limited aggregation (e.g. where we only aggregate roughly equal harms and benefits). In some cases, such principles might permit courses-of-action which are known to cause serious or irreversible environmental degradation. These are, however, the sorts of courses-of-action which the precautionary principle is normally understood to disallow. Therefore, even if equity considerations make CBA problematic, in the absence of commitment to an environmental doing/allowing distinction, it seems that a relational conception of environmental justice may not justify precautionary policies.

In response, I suggest that a focus on the demands of equity might allow us to generate a distinctively “precautionary” approach without either appealing to a doing/allowing distinction or denying the importance of considering the claims of those who would be helped by foregone courses-of-action. Most regulatory contexts display an important asymmetry. Not allowing a course-of-action which would lead to agency-undercutting harm for some but which would benefit extremely badly-off others is not necessarily incompatible with adopting alternative courses-of-action which would benefit those we fail to help. Allowing such a course-of-action is, by contrast, necessarily incompatible with helping those who are harmed (assuming that such harm is uncompensatable). If so, then it seems that those who will be harmed by allowing a course-of-action to go ahead have a stronger complaint against the regulator’s decision than do those who would be helped by allowing that course-of-action to go ahead (to the extent that the latter’s complaints could, in principle, be neutralised by adopting further policies). From this, I suggest that it follows that equitable decision-making should not be based on a principle which mandates courses-of-action whenever they can be expected to have a positive aggregative effect (even when the relevant consequences are limited to roughly equivalent forms of burden and benefit). Rather, at least when those we fail to help may still be helped via other routes, equity considerations demand that policy-making should rest on a principle which disallows courses-of-action known to lead to agency-undercutting harm.

I have argued that when some agency allows a course-of-action which is known to cause agency-undercutting environmental damage, the fact that doing so will benefit some is never sufficient to claim that the decision is equitable. Rather, in general, equity considerations will favour disallowing known-to-be-harmful courses-of-action, even when they would lead to “better” outcomes. What, though, of cases where some agency must decide whether or not to allow some course-of-action which is known to pose a risk (rather than a certainty) of serious or irreversible environmental damage affecting some, but definitely benefiting others?

There are, of course, good reasons to permit proposed courses-of-action which produce important benefits. However, when an agency permits a course-of-action which poses a risk of serious harm to some on the grounds that this will benefit others, those placed at risk of harm have good prima facie reason to object to that course-of-action. Furthermore, to appeal to expected aggregate benefits to justify permitting such courses-of-action would not seem to take these complaints seriously. However, unlike in the case where a policy will necessarily cause harm, when policies only risk harm, these complaints might be taken into account by adopting further measures which aim to eliminate, substantially reduce or mitigate the risks associated with the course-of-action. Modifying a course-of-action through


adopting precautionary measures can be seen as a way of attempting to ensure that decisions are justifiable both to those who would benefit from a course-of-action and to those placed at risk by a particular proposed course-of-action. In short, there are good equity reasons to think that, in general, a principle that courses-of-action which pose a risk of harm to some should not go ahead without precautions best expresses the concerns of equity.18

Furthermore, note that attempting to limit such precautions by claiming that they are inefficient would be, in effect, to attempt to justify the burdens (risks of harm) inflicted on some by appeal to benefits enjoyed by others. As I argued above, such a mode of justification seems at odds with equity concerns. Therefore, for agencies to express an attitude of equitable treatment they should take precautionary measures to mitigate or to reduce the risks of harm associated with the policies they pursue, even when those precautions are “inefficient” by the standards of CBA. This is not to say that agencies must always adopt every possible precaution. It is, however, to say that when policies pose risks, precautions are typically necessary, and the limits on those precautions must be set by considering justifiability to each person, rather than expected net outcomes.19

Of course, there might be extreme cases, where no precautions could be taken against the risks associated with some course-of-action which would definitely benefit some to a great degree (much as there might be cases where policies which will definitely harm some are the only way in which to help many suffering others). In these cases, we must decide whether sometimes imposing significant harms or risks of harm may be justifiable to reduce the serious harms already suffered by others. However, few real-life cases are this stark. Rather, in many cases, we have a range of options, including going ahead with a policy but taking precautions. It is in this range of cases, I suggest, that the ethical appeal of the precautionary principle is best understood.

My arguments help us to understand two distinctive, and puzzling, features of precautionary policy-making. First, I have shown how we might justify focusing more attention on the costs and risks of various policies, rather than on the overall balance of outcomes, without appeal to a problematic doing/allowing distinction. Second, I have shown how and why precautionary concerns only apply in cases of serious damage by showing how equity considerations may vary between cases where courses-of-action cause (or risk) minor damage and cases where they cause (or risk) serious or irreversible damage. In the first case, the long-term benefits of a system allowing such damage might be to the benefit of those harmed, whereas in the second case, the relevant harms are uncompensatable.

Two important clarifications are in order. First, although I hope to have shown how a relational account of environmental justice generates results which seem in accord with both statements and applications of the precautionary principle, I have not specified exactly what kinds or levels of precaution are necessary with regard to which degrees of risk of which forms of harm. There is good reason for this. As Pogge’s account of public health ethics suggests, the demands of equity may be extremely complex (to include, for example, concerns for historical injustice). As such, they are unlikely to be captured by any

18 Cranor 2007 and Lenman 2008 both discuss how Scanlon’s account of equity might apply in the context of imposing risks of harm (including environmental harm). My arguments differ from Cranor’s in that they do not start from claims about how those who suffer from imposed risks perceive those risks. My focus differs from Lenman’s, because I am concerned specifically with institutional actors whom I assume have special responsibilities to promote welfare.

19 These claims can be seen as fleshing out Lenman’s principle that “in imposing risks on a population of people, I should act in a manner consistent with my being guided by the aim of being able to satisfy each member of that population that I acted in ways supported by reasons consistent in principle with the exercise of reasonable precaution against their coming to harm” (Lenman 2008, 111).


algorithmic formula. Therefore, as I understand it, the precautionary principle should not be understood as itself a decision procedure, but more as an aide memoire, which reminds policy-makers of some of the key demands of equity. If so, then it may be a mistake to ask how the precautionary principle should guide particular decisions, or how we can check whether policy-makers have applied the precautionary principle.20 Rather, what really matters is that environmental policy-making expresses equitable attitudes: the precautionary principle is useful insofar as it reminds policy-makers of that multi-faceted demand.

Second, in discussing precautionary policy-making, I have focused on cases where agencies must decide whether or not to allow certain sorts of activities to go ahead or continue. The precautionary principle is often invoked in cases where agencies must decide how to respond to some exogenous threat with low or uncertain probability but potentially high impact (for example, a terrorist attack or meteor strike).21 Nothing I have said helps us to understand decision-making in such cases. I accept this may seem a limitation on my arguments. However, the idea that there is a single principle which tells us how to deal with all risks of serious or irreversible damage is, I think, a mistake. At least, demands for such a principle seem in tension with the concerns of a relational account of justice, according to which normative attention should not focus solely on securing certain sets of outcomes, but on the attitudes expressed in policy-making.

3.2 Precaution in an Epistemically Murky World

Our world is “epistemically murky”. Our knowledge of the existence and magnitude of risks associated with actions is fallible.22 Thus far, my arguments have not depended on knowing precisely who will suffer the burdens and benefits associated with policies: my claim is not that policy must be guided by the actual complaints of those we help or harm, but that it must be based on considering what sorts of principles could be justified to reasonable agents.23 Furthermore, thus far I have stressed that the precise probability of harm associated with some course-of-action is irrelevant to the claim that we should, in general, regulate that course-of-action. However, it is clear that the degree of probability will, even on a relational account, often be relevant to what precautions we should take. Furthermore, even if acting equitably does not require knowing the actual views of those who will be benefited and burdened by a course-of-action, it requires knowing whether a course-of-action is the kind of course-of-action which has certain sorts of benefits and risks. Therefore, we need to know how regulatory agencies should decide whether courses-of-action do pose risks of harm, and, if so, how great those risks are.

In Section 1, I suggested that we can understand one motivation behind the precautionary principle as a concern that standard approaches to statistical testing, including attempts to establish claims about risks, are likely to generate “false negatives”.

20 On my reading, then, such questions as whether the precautionary principle is itself justiciable (as discussed, for example, in Fisher 2001) would be misguided. What matters is that regulatory agencies act equitably; this might occur without explicit appeal to the precautionary principle; and appeal to the precautionary principle might not be sufficient for equitable treatment in some cases.

21 See, for example, Wiener and Stern 2006, for this way of understanding the principle in a discussion of the threat of terrorism.

22 In Hansson’s terminology, we are faced not just with endodoxastic uncertainty—uncertainty over the outcomes of actions—but metadoxastic uncertainty—uncertainty over the correctness of our endodoxastic assessments (Hansson 2006, 233–234).

23 Although, of course, actual consultation may be necessary in many cases for all sorts of reasons not discussed in this paper.


In turn, given that we often (erroneously) treat absence of evidence as evidence of absence, basing policy only on those claims we have established with scientific certainty might lead us to adopt unacceptably risky policies. I claimed, then, that proponents of the precautionary principle might be understood as suggesting that “regulatory science” (i.e. scientific research intended to guide policy-decisions) should employ different epistemic standards from those employed in “normal science”. How might we understand these issues within my proposed relational framework?

We can recast this problem in terms of “attitudes”: agencies which base policy only on those claims which have been established with full scientific certainty can be said to express an attitude which values “epistemic purity” over reducing risk to human agents. It may not be immediately clear that such an attitude should be thought of as “inequitable”: basing policy only on claims established with scientific certainty does not disadvantage certain groups more than others. What might be problematic, however, in insisting that “regulatory science” be held to the same epistemic standards as “normal science” is that such a policy might implicitly make a value judgment that “epistemic purity” matters more than reducing risks. In turn, this value judgment might not be shared by those whose lives are shaped by regulatory (in)action, and, therefore, the burdens individuals suffer may be determined through processes to which they have reasonable objections. My question, then, is whether we can justify a commitment to acting only on claims which have been established with full scientific certainty to those who might be exposed to unnecessary risk as a result of our policies. If not, then we have good reason to think that policy-makers should consider risks even when those risks have not been established with full scientific certainty, but only to some lesser degree of certainty.

One answer here would be to say that individuals have some interest in policy being based only on those claims which are extremely likely to be true. However, consider an everyday case: imagine someone recovering from pneumonia who takes an umbrella to work even when the sky is clear. It seems that he is acting as if he believes it might rain, even though he lacks certainty. In everyday contexts, then, part of what is involved in adopting a precautionary attitude is to act as if one believes certain claims as a way of guarding against the possibilities which follow if those claims are true. Therefore, to say that individuals have some interest in policy-making being guided only by those claims established with scientific certainty, rather than to a lower degree of certainty, misrepresents a familiar feature of everyday life (Sandin 2007).

This is not to say that it is always reasonable to (act as if we) believe that there is a possibility that we will suffer some harm. For example, the choice to take an umbrella to work would be ridiculous if we lived in the desert. However, the degree of certainty at which it becomes reasonable to act as if there is a possibility of rain need not be identical with the degree of certainty at which the scientist would claim that there is a possibility of rain. It is not as if we have a stark choice between requiring scientific certainty for all of our beliefs and an epistemic policy where “anything goes”. The context of environmental policy-making is, of course, far more complex than the everyday context. However, it would be disingenuous to say that a refusal to base policies on claims which have not been established with full scientific certainty is justifiable to individuals because it reflects their own value judgments.

A second defence of reliance on claims established with full scientific certainty would appeal to practical considerations. One claim against use of the precautionary principle in policy contexts is that its application would lead to contradictory recommendations. It might seem that the epistemically wanton character of the principle generates this result. Therefore, we might seek to justify reliance only on claims established with full scientific certainty by saying that only a system which adopts such a standard is practical. Clearly, choice of some principle (in this case, an epistemic principle demanding a high level of


certainty for fact-claims in policy) can be justified if the alternative principles would be impracticable.

However, this defence is confused. Remember, according to the arguments of Section 3.1, precautionary decision-making does not tell us that when our policies pose some risk of serious or irreversible damage, then we should never go ahead with those policies. Rather, as I have claimed, the insight captured by the principle is that going ahead with policies which pose risks of serious damage is acceptable only when we take precautions against those risks. I can see no reason to think that such an approach to policy-making (intended for an epistemically transparent world) is necessarily self-contradictory. However, if an approach intended to help us decide how to think about risky courses-of-action is not self-contradictory, then there is no reason why varying the epistemic standards by which we establish risk-claims must make that approach self-contradictory. Of course, adopting a lower standard-of-proof for generating fact-claims might require us to guard against illusory threats. However, “waste” is not the same as paralysis.

In this section I have suggested that a regulatory agency which countenances as real possibilities only those claims (including probabilistic claims) which have been established with full scientific certainty might fail to express an equitable attitude. I have not argued that such procedures are necessarily inequitable. Perhaps in this arena, there will be reasonable disagreement over which principles best express equity concerns. If so, there might be good reason to appeal to further considerations (say, the problems of establishing a robust institutional framework for regulatory science) which might suggest that we should treat “full scientific certainty” as the gold standard for admitting claims about risks into policy-making. This is, I admit, a result which is unlikely to impress many defenders of the precautionary principle. However, let me finish by noting three appealing features of my approach to understanding the “lack of full scientific certainty” clause.

First, my arguments do not imply the implausible claim that adherence to a high standard-of-proof is always problematic, either from an epistemic or an ethical perspective. For example, in some contexts—such as in a scientific laboratory or in a criminal trial—there might be excellent reasons to adhere to very high standards-of-proof. What I have suggested is that in certain contexts, insisting on a high standard-of-proof may be unjustifiable; this is useful as a defence against the claim that precautionary policy-making in general reflects an unwarranted mistrust of science.24 Second, there is already much debate over how precisely we should set “standards of proof” in the regulatory context if we deny the importance of “full scientific certainty”.25 What I hope to have done is to provide a non-consequentialist framework within which we can understand such debates.

Third, even defenders of CBA sometimes allow that we need some kind of decision-procedure for deciding what to do in circumstances of “uncertainty”, cases where we cannot assign determinate probabilities to any of the possible outcomes of our action. In such cases, it is frequently claimed that a maxi-min rule should be adopted. Adoption of such a maxi-min rule is often claimed to be an application of precautionary thought. As I noted in Section 1 above, this is the kind of strategy used by Stephen Gardiner. I hope to have suggested that it is a mistake to think of the precautionary principle as simply a proposal as to how to reason under circumstances of uncertainty. Rather, even in situations of “epistemic transparency”, we might have reason to adopt precautionary measures. Furthermore, the very distinction between decision-making under risk and under

24 Compare Sunstein, “a large goal of cost-benefit analysis is to increase the role of science in risk regulation” (Sunstein 2002, 108).

25 For practical suggestions along these lines, see the essays in Harremoes et al. 2002.


uncertainty is problematic. A proposal such as Gardiner’s overlooks the fact that were we to adopt different epistemic standards, circumstances of uncertainty might become circumstances of risk.26

4 Conclusion

Kip Viscusi has written that “within the highly charged political context of policy development, it is almost always possible to conceive of some notion of risk equity to justify even the most inefficient policy interventions” (Viscusi 2000, 845). In this paper, I have argued that Viscusi is right, but that this is no objection to talk of equity in the context of risk. Rather, I propose that environmental policies should be “irrational”, as judged by the standards of efficiency, and should even be based on “bad science”.

Acknowledgments I am extremely grateful to Katherine Angel, Jo Burch-Brown, Karsten Klint-Jensen, Tim Lewens, Serena Olsaretti, Onora O'Neill, Martin Peterson, Per Sandin and Jo Wolff for extremely useful comments on some of the arguments in this paper. I also owe a particular debt of gratitude for many discussions on this topic to Charlotte Goodburn.

References

Anderson E (1993) Value in ethics and economics. Harvard University Press, Cambridge, Mass
Copp D (1985) Morality, reason and management science: the rationale of cost benefit analysis. Soc Philos Policy 2:128–152
Copp D (1987) The justice and rationale of cost-benefit analysis. Theor Decis 23:65–87
Cranor C (1993) Regulating toxic substances. Oxford University Press, Oxford
Cranor C (2007) Towards a non-consequentialist approach to acceptable risks. In: Lewens T (ed) Risk: philosophical perspectives. Routledge, London
Dasgupta P (2001) Human well-being and the natural environment. Oxford University Press, Oxford
Dercon S (2004) Introduction. In: Dercon S (ed) Insurance against poverty. Oxford University Press, Oxford
Fisher E (2001) Is the precautionary principle justiciable? J Environ Law 13:315–334
French P (1984) Collective and corporate responsibility. Columbia University Press, New York
Gardiner S (2006) A core precautionary principle. J Polit Philos 14:33–60
Hacking I (1975) The emergence of probability. Cambridge University Press, Cambridge
Hacking I (1990) The taming of chance. Cambridge University Press, Cambridge
Hansson S-O (1998) Setting the limit: occupational health standards and the limits of science. Oxford University Press, Oxford
Hansson S-O (2003) Ethical criteria of risk acceptance. Erkenntnis 59:291–309
Hansson S-O (2006) Economic (ir)rationality in risk analysis. Econ Philos 22:231–241
Hansson S-O (2007) Philosophical problems in cost-benefit analysis. Econ Philos 23:163–183
Harremoes P et al (2002) The precautionary principle in the 20th century: late lessons from early warnings. Earthscan, London
Hourdequin M (2007) Doing, allowing, and precaution. Environ Ethics 29:339–358
Hubin D (1994) The moral justification of benefit/cost analysis. Econ Philos 10:169–194
Hughes J (2006) How not to criticise the precautionary principle. J Med Philos 31
Jasanoff S (1990) The fifth branch. Harvard University Press, London

26 Furthermore, Gardiner claims that identifying “realistic” threats under uncertainty will employ “thick concepts”. However, this implies that reliance on scientific testing assumes only thin concepts. The problem raised above is that we treat the norms of standard scientific testing as if they were thin, when they reflect a value judgment.


John SD (2007) How to take deontological concerns seriously in risk-cost-benefit analysis: a re-interpretation of the precautionary principle. J Med Ethics 33:221–224
Korsgaard C (1996) The sources of normativity. Cambridge University Press, Cambridge
Lenman J (2000) Preferences in their place. Environ Values 9:431–451
Lenman J (2008) Contractualism and risk imposition. Polit Philos Econ 7(1):99–122
Manson N (2002) Formulating the precautionary principle. Environ Ethics 24:263–274
Mellor DH (2005) Probability: a philosophical introduction. Routledge and Kegan Paul, London
O’Neill J (1993) Ecology, policy and politics: human well-being and the natural world. Routledge, New York-London
O’Neill O (1996) Towards justice and virtue. Cambridge University Press, Cambridge
O’Neill O (2002) Autonomy and trust in bioethics. Cambridge University Press, Cambridge
O’Riordan T, Cameron J (eds) (1994) Interpreting the precautionary principle. Cameron May, London
Pogge T (2004) Relational conceptions of justice. In: Anand S, Peter F, Sen AK (eds) Public health, ethics and equity. Oxford University Press, Oxford, pp 135–162
Raffensberger C, Tickner J (1999) Introduction: to foresee and forestall. In: Raffensberger C, Tickner J (eds) Protecting public health and the environment: implementing the precautionary principle. Island, Washington, DC, pp 1–11
Sandin P (2007) Common sense precaution and varieties of the precautionary principle. In: Lewens T (ed) Risk: philosophical perspectives. Routledge, London
Sandin P et al (2002) Five charges against the precautionary principle. J Risk Res 5:287–299
Scanlon T (1998) What we owe to each other. Harvard University Press, Cambridge, MA
Schmidtz D (2001) A place for cost-benefit analysis. Philos Issues 11:148–171
Shrader-Frechette K (1995) Practical ecology and foundations for environmental ethics. J Philos 92(12):621–635
Sunstein C (2002) Risk and reason. Cambridge University Press, Cambridge
Sunstein C (2003) Beyond the precautionary principle. Univ PA Law Rev 151:1003–1058
Sunstein C (2005) Laws of fear. Cambridge University Press, Cambridge
Sunstein C (2007) Moral heuristics and risk. In: Lewens T (ed) Risk: philosophical perspectives. Routledge, London
Thompson A (2006) Environmentalism, moral responsibility, and the doctrine of doing and allowing. Ethics Place Environ 9(3):269–278
Trouwborst A (2002) Evolution and status of the precautionary principle in international law. Kluwer Law International, The Hague
United Nations General Assembly (2002) Rio Declaration on Environment and Development. Report of the United Nations Conference on Environment and Development, Rio de Janeiro, 3–14 June 1992. A/CONF.151/26, Vol I. UN, New York
Viscusi K (2000) Risk equity. J Legal Stud 29(2):843–872
Wiener J, Stern J (2006) Precaution against terrorism. J Risk Res 9:393–447
Wildavsky A (1997) But is it true? Harvard University Press, Cambridge, Mass
Wingspread Statement (1998) “The precautionary principle”. Rachel’s Environment and Health Weekly 586, February 19, 1998, accessed via http://www.psrast.org/precaut.htm (June 2008)
