
    Risk Analysis, Vol. 28, No. 5, 2008 DOI: 10.1111/j.1539-6924.2008.01086.x

An Integrated Approach to Oversight Assessment for Emerging Technologies

Jennifer Kuzma, Jordan Paradise, Gurumurthy Ramachandran, Jee-Ae Kim, Adam Kokotovich, and Susan M. Wolf

Analysis of oversight systems is often conducted from a single disciplinary perspective and by using a limited set of criteria for evaluation. In this article, we develop an approach that blends risk analysis, social science, public administration, legal, public policy, and ethical perspectives to develop a broad set of criteria for assessing oversight systems. Multiple methods, including historical analysis, expert elicitation, and behavioral consensus, were employed to develop multidisciplinary criteria for evaluating oversight of emerging technologies. Sixty-six initial criteria were identified from extensive literature reviews and input from our Working Group. Criteria were placed in four categories reflecting the development, attributes, evolution, and outcomes of oversight systems. Expert elicitation, consensus methods, and multidisciplinary review of the literature were used to refine a condensed, operative set of criteria. Twenty-eight criteria resulted, spanning four categories: seven development criteria, 15 attribute criteria, five outcome criteria, and one evolution criterion. These criteria illuminate how oversight systems develop, operate, change, and affect society. We term our approach "integrated oversight assessment" and propose its use as a tool for analyzing relationships among features, outcomes, and tradeoffs of oversight systems. Comparisons among historical case studies of oversight using a consistent set of criteria should result in defensible and evidence-supported lessons to guide the development of oversight systems for emerging technologies, such as nanotechnology.

KEY WORDS: Expert elicitation; multicriteria decision analysis; multidisciplinary; nanotechnology; oversight assessment; risk

    1. INTRODUCTION

U.S. approaches to oversight of research and technology have developed over time in an effort to ensure safety for humans, animals, and the environment; to control use in a social context; and, on occasion, to promote innovation. In modern times, regulatory and oversight tools have evolved to include diverse approaches such as performance standards, tradable allowances, consultations between government and industry, and premarket safety and efficacy reviews (Wiener, 2004; Davies, 2007).

Address correspondence to Jennifer Kuzma, Center for Science, Technology, and Public Policy, Hubert H. Humphrey Institute, University of Minnesota, 301-19th Ave. S., Minneapolis, MN 55455, USA; tel: 612-625-6337; fax: 612-625-3513; [email protected].

The decision whether to impose an oversight system, the oversight elements, the level of oversight (e.g., federal, state, local), the choice of approach (e.g., mandatory or voluntary), and its execution can profoundly affect technological development, individual and collective interests, and public trust and attitudes toward technological products (Rabino, 1994; Zechendorf, 1994; Siegrist, 2000; Cobb & Macoubrie, 2004; Macoubrie, 2005, 2006). Oversight is conducted by a range of institutions with various capabilities, cultures, and

© 2008 Society for Risk Analysis


particular discipline (e.g., U.S. EPA, 1983; OTA, 1995; Davies, 2007). Yet, oversight affects multiple stakeholders with various viewpoints, values, and concerns and should pass muster from policy, legal, economic, ethical, and scientific perspectives.

Some stakeholders are most concerned about economic impacts or job opportunities that result or are lost with technological adoption. Others care primarily about the health and environmental impacts of new products. Most consumers value parameters that affect their daily life, such as improved health, lower costs, better local environments, convenience, and quality. Government regulators often focus on health risks, costs, and benefits (U.S. EPA, 1983; White House, 1993, as amended, 2007). From a global perspective, there are emerging concerns that risks and benefits of technological products be fairly distributed within and among nations (Singer et al., 2005). From an ethical perspective, evaluation of emerging technologies may raise issues of conflict with moral principles or values and questions of whether the oversight process respects them (Walters, 2004). Although not every group or individual viewpoint can be accommodated in an oversight system, in a democracy such as the United States an oversight system should respond to a range of viewpoints, values, and concerns (Jasanoff, 1990; Wilsdon & Willis, 2004; MacNaghten et al., 2005).

Some groups are making progress in integrating criteria for analysis of oversight frameworks in systematic ways. For example, the Fast Environmental Regulatory Tool (FERET) was designed to evaluate regulatory options via a computerized template to "structure the basic integration of impacts and valuations, provide a core survey of the literature, incorporate uncertainty through simulation methods, and deliver a benefit-cost analysis that reports quantitative impacts, economic values, and qualitative elements" (Farrow et al., 2001, p. 430). FERET addresses distributional issues of who bears the costs and who receives the benefits of oversight, and uses sophisticated modeling techniques to incorporate uncertainty. However, it does not account for other important attributes of oversight systems, for example, those that affect public confidence, legitimacy, developer satisfaction, and technology development.
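The mechanics of a simulation-based benefit-cost template can be illustrated with a short sketch. This is a minimal, hypothetical example of propagating uncertainty through a net-benefit calculation by Monte Carlo sampling; the distributions, parameter values, and variable names are illustrative assumptions, not FERET's actual model.

```python
import random

def simulate_net_benefit(n_draws: int = 10_000, seed: int = 1) -> dict:
    """Monte Carlo sketch of a benefit-cost analysis under uncertainty.

    Benefits and costs are drawn from assumed distributions (hypothetical
    numbers, in millions of dollars) rather than point estimates, so the
    output is a distribution of net benefits instead of a single value.
    """
    rng = random.Random(seed)
    net_benefits = []
    for _ in range(n_draws):
        health_benefit = rng.lognormvariate(3.0, 0.5)       # uncertain avoided harm
        compliance_cost = rng.normalvariate(15.0, 3.0)      # uncertain industry cost
        admin_cost = rng.uniform(1.0, 4.0)                  # uncertain agency cost
        net_benefits.append(health_benefit - compliance_cost - admin_cost)

    net_benefits.sort()
    return {
        "mean": sum(net_benefits) / n_draws,
        "p05": net_benefits[int(0.05 * n_draws)],            # 5th percentile
        "p95": net_benefits[int(0.95 * n_draws)],            # 95th percentile
        "prob_positive": sum(nb > 0 for nb in net_benefits) / n_draws,
    }

print(simulate_net_benefit())
```

Reporting a percentile range and the probability of positive net benefit, rather than one number, is what distinguishes a simulation-based template from a point-estimate cost-benefit test.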

In contrast, a more qualitative oversight evaluation method is used in an article on the deliberations of the Consortium to Examine Clinical Research Ethics (Emanuel et al., 2004). This diverse expert group used qualitative and normative approaches to identify 15 problems in the oversight of research involving human participants. It then sorted them into three categories (structural, procedural, and performance assessment problems) and evaluated whether proposed reforms would address those challenges. Identified problems in the oversight of human subjects research included the ability of the oversight system to be consistent and flexible, to manage conflicts of interest, and to provide for adequate education of participants (Emanuel et al., 2004). FERET's highly quantitative model at one extreme, and the Consortium's qualitative expert-group consensus model at the other, show a range of approaches to evaluating oversight. However, the recent literature examining emerging technologies establishes that technology governance requires collaboration among scientists, government, and the public (Wiek et al., 2007). This suggests that oversight assessment should use a broad range of criteria that addresses the concerns of multiple stakeholders.

    2.2. Basis of Our Approach

The goal of this study was to develop a multidisciplinary approach to more comprehensively evaluate oversight systems for emerging technologies. In our work, we define oversight broadly, as "watchful or responsible care" that can include regulatory supervision, or nonregulatory and voluntary approaches or systems (Kuzma, 2006). Our IOA approach is based in part upon multicriteria decision analysis (MCDA). MCDA relies on the notion that no single outcome metric can capture the appropriateness or effectiveness of a system, allows for integrating heterogeneous information, and enables incorporation of expert and stakeholder judgments (reviewed in Belton & Stewart, 2002). MCDA refers to a range of approaches in which multiple criteria are developed, ranked, and used to compare alternatives for decision making. General categories of criteria have been described, such as utility-based criteria (focusing on cost, risk-benefit comparison, and outcomes), rights-based criteria (focusing on whether people have consented to risk and their rights are being respected), and best available technology-based criteria (focusing on using the best technologies available to reduce risk to the extent possible) (Morgan & Henrion, 1990a). MCDA can be descriptively useful to better understand systems, stakeholder and expert views, and multiple perspectives on decisions. However, its normative utility is limited because its ability to predict or recommend the best decision or approach is unclear (Morgan & Henrion, 1990a).
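To make the mechanics concrete, the sketch below shows the simplest form of MCDA, a weighted-sum ranking of alternatives against multiple criteria. The criteria names, weights, and scores are invented for illustration and are not drawn from the article.

```python
# Minimal weighted-sum MCDA sketch: rank alternatives against weighted criteria.
# All names, weights, and scores below are hypothetical.

weights = {"health_risk": 0.40, "cost": 0.20, "public_confidence": 0.25, "innovation": 0.15}

# Each alternative is scored 0-100 on every criterion (higher = better).
alternatives = {
    "mandatory_premarket_review": {"health_risk": 85, "cost": 40, "public_confidence": 80, "innovation": 50},
    "voluntary_reporting":        {"health_risk": 45, "cost": 80, "public_confidence": 40, "innovation": 75},
}

def weighted_score(scores: dict, weights: dict) -> float:
    """Aggregate criterion scores into one value using each criterion's weight."""
    return sum(weights[c] * scores[c] for c in weights)

ranked = sorted(alternatives, key=lambda a: weighted_score(alternatives[a], weights), reverse=True)
for alt in ranked:
    print(f"{alt}: {weighted_score(alternatives[alt], weights):.1f}")
```

The descriptive-versus-normative limitation noted above shows up directly here: the ranking is only as defensible as the weights, which encode whose values count and by how much.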


Fig. 1. Integrated oversight assessment (IOA) methodology. The IOA approach combines multicriteria decision analysis, quantitative and qualitative analysis, and historical literature analysis, as described in this article.

MCDA has been used recently to evaluate strategies for risk management (e.g., remediating environmental hazards such as oil spills) (Linkov et al., 2006, 2007a). An MCDA approach was recently used to evaluate oversight approaches to three hypothetical nanomaterials by eliciting criteria and weightings from scientists and managers (Linkov et al., 2007b). Criteria used were health and ecological effects, societal importance, and stakeholder preference, and these were weighted according to their importance. However, since the products were hypothetical, the criteria were broad and few, and the authors of the study ranked the criteria themselves, the results were limited to demonstrating how MCDA could be applied to decisions about the environmental health and safety of nanomaterials. To our knowledge, MCDA has neither been applied to broader oversight policy questions for emerging technologies, nor has it incorporated comprehensive sets of criteria that address intrinsic values, rights, and fairness, as well as utilitarian outcomes.

Expanding on the general framework of MCDA, we employ several methods to devise criteria for evaluating and describing oversight systems, including review of the relevant literature, historical analysis, group consensus, and quantitative expert and stakeholder elicitation (Fig. 1). Fields of public policy (including social science and risk policy), social science, law, and ethics were considered to develop several types of criteria, including those relating to economics, social science, safety, values, and impacts on technology research and development. In our IOA approach, we consider viewpoints of different actors and stakeholders; the role of various programs, policies, and decisions; and diverse types of impacts. The breadth of our approach, resulting from the integration of multiple disciplines, literatures, and methodologies, makes it challenging. It is a more comprehensive approach than many others, but it is not exhaustive in nature. We acknowledge that it has limitations that stem in part from its reliance on literature analysis and the views of experts and stakeholders. Broad citizen surveys or public engagement exercises were not directly included. However, public concerns and attitudes are represented in the literature we used to develop and refine the criteria.


Despite the limitations, the multiple disciplines and methods employed in IOA make it a unique approach that is well equipped to understand many dimensions of oversight systems, depict the complexity of oversight, and aid in the design and implementation of more viable and robust systems. Our overall methodology is depicted in Fig. 1 and described in more detail in the following sections.

3. DEVELOPING AND CATEGORIZING CRITERIA FOR OVERSIGHT ASSESSMENT

We developed our criteria through a multistage process by drawing upon the literature, conducting historical analysis, and using stakeholder and expert elicitation and consensus. In order to represent multiple disciplines in our assessment of oversight systems, we were inclusive in choosing criteria for initial consideration. Criteria characterized how the oversight system developed, its attributes, evolution, and outcomes. At this stage, we use the term "criteria" broadly, as descriptive or evaluative, and do not assume which criteria are able to predict good oversight or outcomes that a majority would believe to be positive (e.g., positive environmental impacts). However, because we were guided in our choice of criteria by what experts, stakeholders, and citizens value, many are likely to prove to be normatively important for assessing whether oversight is effective or appropriate. Future work and publications comparing across six historical case studies (gene therapy, genetically engineered organisms in the food supply, human drugs, medical devices, chemicals in the workplace, and chemicals in the environment) should allow us to identify what criteria are predictive of successful and appropriate oversight (Fig. 1). Judging which criteria fall into which category is not straightforward at this point, as we do not know which independent or descriptive variables will impact the dependent or evaluative ones (e.g., outcome criteria) most positively for oversight until the criteria are deployed across the case studies (see Section 5).

Initially, 66 criteria were identified from supporting literature (Table I and Appendix A).¹ Searches were conducted using a variety of databases and resources² to rigorously research the legal, ethics, and public policy literature regarding oversight, including materials on criteria utilized in oversight analysis.

¹ Appendix A is available in the online version of the article.
² Resources relevant to legal analysis of oversight include statutes, legal cases, the Congressional Record, the Federal Register, agency rulings, bill history, and law review articles. Databases commonly utilized for legal analysis include both electronic (e.g., Lexis/Nexis, Westlaw, Thomas U.S. Government, FindLaw, Legal Research Network, LegalTrac) and hard-copy resources. Databases commonly used for ethics analysis, especially bioethics, include the electronic websites Medline, MedBioWorld, ISI Web of Science, Academic Search Premiere, Lexis/Nexis, Westlaw, as well as bioethics-specific journals such as the American Journal of Bioethics; Journal of Law, Medicine & Ethics; American Journal of Law & Medicine; Hastings Center Report; Health Matrix; Kennedy Institute of Ethics Journal; and Yale Journal of Health Policy, Law & Ethics. Databases for public policy research are diverse, including Ingenta, Google Scholar, JSTOR, Current Contents, PAIS (Public Affairs Information Service), PubMed, Agricola, and PolicyFile, as well as specific journals such as Risk Analysis; Issues in Science and Technology; Science; Nature; Nature Biotechnology; Environmental Health and Safety; Journal of Applied Economics; Science and Public Policy; Technology in Society; International Journal of Technology Policy and Law; and Journal of Policy Analysis and Management.

We strove to set up a methodology that would be conducive to generating and testing hypotheses about oversight systems. In order to probe relationships between features of oversight systems and important outcomes of them in future work (i.e., in application of the criteria to historical case studies, see Fig. 1), we categorized criteria into four groups: those associated with the initial development of the system (e.g., establishment of policies, procedures, or regulations); the attributes of the system (e.g., how the system operates for particular processes or decisions); the outcomes of the system (e.g., social, economic, cultural, health, environmental, and consumer impacts); and the evolution of the system (e.g., changes to the development, attributes, or outcomes over time). We suspect that criteria within and among categories interrelate and affect each other. For example, we have hypothesized that the way in which the oversight mechanism develops and its attributes are related to outcomes such as public confidence, health and environmental impacts, and economic effects on industry and stakeholders. Outcomes then spur further development and change attributes over time as the system evolves. Below, we discuss some of the supporting literature and the resulting criteria, also listed in Table I.
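For readers tracking the letter-number identifiers used throughout (d for development, a for attributes, e for evolution, o for outcomes), the sketch below shows one way such criteria could be represented for analysis. The data structure and field names are our illustrative assumption, not the authors' actual coding scheme.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """One oversight-assessment criterion, keyed by its letter-number ID."""
    cid: str             # e.g., "d10" = tenth development criterion
    name: str
    category: str        # "development", "attributes", "evolution", or "outcomes"
    guiding_question: str

criteria = [
    Criterion("d10", "Transparency", "development",
              "Were options under consideration known to the public?"),
    Criterion("a22", "Transparency", "attributes",
              "Could interested parties obtain information about specific decisions?"),
    Criterion("o5", "Public confidence", "outcomes",
              "What do the public or citizens think about the system?"),
]

# Group criteria by category to examine development -> attributes -> outcomes links.
by_category: dict = {}
for c in criteria:
    by_category.setdefault(c.category, []).append(c.cid)
print(by_category)
```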

Economic criteria for evaluating oversight systems are prominent in the literature.



Table I. List of Initial Criteria for Oversight Assessment and Supporting Literature

Criterion | Guiding Question | Supporting Literature

Development
d1 Impetus | What were the driving forces? | Davies, 2007
d2 Clarity of technological subject matter | Are the technologies, processes, and products to be overseen well defined? | Davies, 2007
d3 Legal grounding | How explicit are the statutes or rules on which the oversight framework is based? Is it clear that the decisionmakers in the framework have legal authority for the actions they proposed at the time? Is there grounding in existing laws? | Fischoff et al., 1981; Slovic, 1987; Jasanoff, 1990, 2005; Porter, 1991; OTA, 1995; Frewer et al., 1996, 2004; NRC, 1996; Jaffe & Palmer, 1997; Siegrist, 2000; Ogus, 2002; Cobb & Macoubrie, 2004; Macoubrie, 2005; Davies, 2007; Siegrist et al., 2007a, 2007b
d4 Federal authority | How strong was the authority for federal actors? | Fischoff et al., 1981; Slovic, 1987; Jasanoff, 1990, 2005; Frewer et al., 1996, 2004; NRC, 1996; Siegrist, 2000; Cobb & Macoubrie, 2004; Macoubrie, 2005; Davies, 2007; Siegrist et al., 2007a, 2007b
d5 Industry authority | How strong was the authority for industry actors? | Einsiedel & Goldenberg, 2004; Stewart & McLean, 2004; Thompson, 2007
d6 Loci | How many loci of authority (e.g., industry, government, nonprofit, developers, scientists, clinicians) for oversight were included in the development stages? | Davies, 2007
d7 Stakeholder input | Was there a process or opportunities for stakeholder contribution to discussions or decisions about what the system is based on, how it operates, or how it is structured? | Jasanoff, 1990; Einsiedel & Goldenberg, 2004; Stewart & McLean, 2004; Thompson, 2007
d8 Breadth of input | To what extent were groups of citizens and stakeholders from all sectors of society encouraged to provide input to decisionmakers who devised the oversight framework? Were some groups missing? | Fischoff et al., 1981; Slovic, 1987; Jasanoff, 1990, 2005; Frewer et al., 1996, 2004; NRC, 1996; Siegrist, 2000; Cobb & Macoubrie, 2004; Einsiedel & Goldenberg, 2004; Stewart & McLean, 2004; Macoubrie, 2005; Siegrist et al., 2007a, 2007b; Thompson, 2007
d9 Opportunity for value discussions | How many and what kinds of opportunities did stakeholders and citizens have to bring up concerns about values or nontechnical impacts? | Fischoff et al., 1981; Slovic, 1987; Jasanoff, 1990, 2005; Frewer et al., 1996, 2004; NRC, 1996; Beauchamp & Walters, 1999; Siegrist, 2000; Cobb & Macoubrie, 2004; Einsiedel & Goldenberg, 2004; Emanuel et al., 2004; Stewart & McLean, 2004; Macoubrie, 2005; Siegrist et al., 2007b; Thompson, 2007
d10 Transparency | Were options that the agencies or other decision-making bodies were considering known to the public? Were studies about the pros and cons of these options available? | Fischoff et al., 1981; U.S. EPA, 1983; Slovic, 1987; Jasanoff, 1990, 2005; White House, 1993, as amended, 2007; Frewer et al., 1996, 2004; NRC, 1996; Siegrist, 2000; Cobb & Macoubrie, 2004; Macoubrie, 2005; Siegrist et al., 2007a, 2007b
d11 Financial resources | How sufficient were the funds provided to the developers of the framework? | Emanuel et al., 2004; OTA, 1995; Davies, 2007
d12 Personnel education and training | How trained or educated were actors during the development stage of oversight? | Emanuel et al., 2004; Davies, 2007
d13 Empirical basis | To what extent was scientific or other objective evidence used in designing the review or oversight process central to the framework? | Davies, 2007


Attributes
a1 Legal grounding | How explicit are the statutes or rules on which specific decisions within the oversight framework are based? Is it clear that the decisionmakers in the framework have legal authority for the actions they propose? | Fischoff et al., 1981; Slovic, 1987; Greene, 1990; Jasanoff, 1990, 2005; Frewer et al., 1996, 2004; NRC, 1996; Cole & Grossman, 1999; Siegrist, 2000; Cobb & Macoubrie, 2004, 2005; Davies, 2007; Siegrist et al., 2007a, 2007b
a2 Data requirement | How comprehensive are the safety and other studies required for submittal to authorities? If the system is voluntary, how comprehensive are the data that are generated and available for review prior to decisions about release or approval? | Fischoff et al., 1981; U.S. EPA, 1983; Slovic, 1987; Greene, 1990; Jasanoff, 1990, 2005; Frewer et al., 1996, 2004; NRC, 1996; Cole & Grossman, 1999; Siegrist, 2000; Cobb & Macoubrie, 2004; Macoubrie, 2005; Davies, 2007; Siegrist et al., 2007a, 2007b
a3 Treatment of uncertainty | Is uncertainty accounted for qualitatively or quantitatively in data and study submissions? | Fischoff et al., 1981; Slovic, 1987; Jasanoff, 1990, 2005; Frewer et al., 1996, 2004; NRC, 1996; Siegrist, 2000; Cobb & Macoubrie, 2004; Macoubrie, 2005; Davies, 2007; Siegrist et al., 2007a, 2007b
a4 Stringency of system | Is the system mandatory or voluntary? | Fischoff et al., 1981; Slovic, 1987; Greene, 1990; Jasanoff, 1990, 2005; Porter, 1991; OTA, 1995; Frewer et al., 1996, 2004; NRC, 1996; Jaffe & Palmer, 1997; Cole & Grossman, 1999; Siegrist, 2000; Ogus, 2002; Cobb & Macoubrie, 2004, 2005; Davies, 2007; Siegrist et al., 2007a, 2007b
a5 Empirical basis | To what extent is scientific or other objective evidence used in making decisions about specific products, processes, or trials? | Fischoff et al., 1981; Slovic, 1987; Jasanoff, 1990, 2005; Frewer et al., 1996, 2004; NRC, 1996; Siegrist, 2000; Cobb & Macoubrie, 2004; Macoubrie, 2005; Davies, 2007; Siegrist et al., 2007a, 2007b
a6 Compliance and enforcement | To what extent does the system ensure compliance with legal and other requirements, and to what extent can it prosecute or penalize noncompliers? | Fischoff et al., 1981; Slovic, 1987; Greene, 1990; Jasanoff, 1990, 2005; OTA, 1995; Frewer et al., 1996, 2004; NRC, 1996; Cole & Grossman, 1999; Siegrist, 2000; Cobb & Macoubrie, 2004; Macoubrie, 2005; Davies, 2007; Siegrist et al., 2007a, 2007b
a7 Incentives | Are the stakeholders in the system encouraged to abide by the requirements of the system? | Davies, 2007
a8 Treatment of intellectual property and proprietary information | How does confidential business information, trade secrets, or intellectual property get treated in applications for approval? | NRC, 2000; PIFB, 2003a; Einsiedel & Goldenberg, 2004; Stewart & McLean, 2004; Davies, 2007; Thompson, 2007
a9 Institutional structure | How many agencies or entities with legal authority are involved in the process of decision making within the framework? | Porter, 1991; OTA, 1995; Jaffe & Palmer, 1997; Ogus, 2002; Davies, 2007
a10 Feedback loop | Can something discovered in later phases of product review be used to improve early review stages for other, the same, or modified products/trials in the future? | OTA, 1995; Emanuel et al., 2004; Davies, 2007


a23 Conflict of interest | Do independent experts conduct or review safety studies? Are conflicts of interest disclosed routinely? | Einsiedel & Goldenberg, 2004; Emanuel et al., 2004; Stewart & McLean, 2004; Davies, 2007; Thompson, 2007
a24 Conflict of views | How are conflicting views handled in the review of products, processes, and trials? |
a25 Economic costs and benefits considered | What role does cost-benefit analysis play in approvals? | U.S. EPA, 1983; OTA, 1995
a26 Accountability and liability | Is there a fair and just system for addressing product or trial failures with appropriate compensation to affected parties and/or environmental remediation? | Davies, 2007
a27 Education of decisionmakers, stakeholders | To what extent does the system make efforts to educate the interested and affected parties, as well as the decisionmakers? | Emanuel et al., 2004; Davies, 2007
a28 Informed consent | To what extent does the system supply the amount and type of information so that people can make informed decisions about what they will accept? | Fischoff et al., 1981; Slovic, 1987; Jasanoff, 1990, 2005; Frewer et al., 1996, 2004; NRC, 1996; Beauchamp & Walters, 1999; Siegrist, 2000; Cobb & Macoubrie, 2004, 2005; Siegrist et al., 2007a, 2007b
a29 International harmonization | How well does the system match up with other systems around the world? | Newell, 2003

Evolution
e1 Extent of change in attributes | To what extent has the system changed over time? | OTA, 1995
e2 Distinguishable periods of change | Can separable periods in the oversight history be distinguished? |
e3 Extent of change in attributes | To what extent have the system's attributes changed over time? | OTA, 1995
e4 Change in stakeholder satisfaction | To what extent have stakeholder opinions changed during the evolution of the system? |
e5 Public confidence | To what extent have public opinions changed during the evolution of the system? | OTA, 1995

Outcomes
o1 Product safety | What is the number of adverse reports compared to the number of approvals? | OTA, 1995
o2 Time and costs for market approval | How long does it take and how much does it cost for approval? | U.S. EPA, 1983; OTA, 1995; Emanuel et al., 2004
o3 Recalls | What is the number of recalls compared to the number of approvals? | OTA, 1995; Davies, 2007
o4 Stakeholder satisfaction | How well do stakeholders and experts regard the system? | Beauchamp & Walters, 1999
o5 Public confidence | What do the public or citizens think about the system? How about disadvantaged, special, or susceptible populations? | Porter, 1991; Rabino, 1994; OTA, 1995; Frewer et al., 1996; Siegrist, 2000; Macoubrie, 2005, 2006
o6 Effects on social groups | Are the net effects of approvals positively affecting the vast majority of social groups? | Jasanoff, 2005
o7 Cultural effects | Are the net effects of approvals positively affecting people and their cultures? | OTA, 1995; Beauchamp & Walters, 1999; Jasanoff, 2005
o8 Research impacts | Has the system enhanced and supported research either on environmental health and safety or in the development of products? | Porter, 1991; OTA, 1995; Jaffe & Palmer, 1997; Ogus, 2002


o9 Innovation | Has the system led to more innovation in the field or stifled it? | U.S. EPA, 1983; Greene, 1990; Porter, 1991; Rabino, 1994; OTA, 1995; Jaffe & Palmer, 1997; Cole & Grossman, 1999; Siegrist, 2000; Ogus, 2002; Macoubrie, 2005, 2006
o10 Health | Does the oversight system impact health in positive ways? | U.S. EPA, 1983; White House, 1993, as amended, 2007; Beauchamp & Walters, 1999
o11 Distributional health impacts | Are the health impacts equitably distributed? Is there an inequitable impact on specific social or disadvantaged groups? | U.S. EPA, 1983; White House, 1993; OTA, 1995; Beauchamp & Walters, 1999
o12 Environmental impacts | Does the oversight system impact the environment in positive ways? | U.S. EPA, 1983; Greene, 1990; White House, 1993; OTA, 1995; Cole & Grossman, 1999
o13 Nonindustry economic impacts | How does the system impact nonindustry stakeholder groups economically? | U.S. EPA, 1983; OTA, 1995; Beauchamp & Walters, 1999; Jasanoff, 2005
o14 Effects on big corporations | How are big companies doing, financially and otherwise, as a result of the system? | U.S. EPA, 1983; Porter, 1991; Rabino, 1994; OTA, 1995; Jaffe & Palmer, 1997; Siegrist, 2000; Ogus, 2002; Newell, 2003; Macoubrie, 2005, 2006
o15 Effects on small- to medium-sized enterprises (SMEs) | Are SMEs disadvantaged as a result of the oversight system? Are they suffering? | U.S. EPA, 1983; Porter, 1991; Rabino, 1994; OTA, 1995; Jaffe & Palmer, 1997; Siegrist, 2000; Ogus, 2002; Newell, 2003; Macoubrie, 2005, 2006
o16 Economic development | Does the approval of the products or trials improve the overall economic situation of the nation? | U.S. EPA, 1983; Porter, 1991; Rabino, 1994; OTA, 1995; Jaffe & Palmer, 1997; Siegrist, 2000; Ogus, 2002; Macoubrie, 2005, 2006
o17 Global competitiveness for the United States | Does the oversight system disadvantage the United States in the global marketplace? | U.S. EPA, 1983; OTA, 1995; Newell, 2003
o18 Distributional economic impacts | Does the approval of the products or trials improve the economic situation of rural or developing world citizens? | U.S. EPA, 1983; Porter, 1991; OTA, 1995; Jaffe & Palmer, 1997; Ogus, 2002
o19 Proposals for change | Have there been proposals for change resulting from the oversight system? | OTA, 1995

Note: Examples of supporting literature are listed for most of the initial 66 criteria. References to authors as supporting literature reflect our interpretation of that literature. Although the authors of the literature referenced may not have explicitly stated that particular criterion for evaluation of an oversight system, their work supports the importance of that criterion.

For example, in the United States, the federal government often formally evaluates oversight systems based on costs and benefits through regulatory impact assessment (RIA) and economic analyses (U.S. EPA, 1983, 2000). Oversight systems often originate with statutory systems that are then detailed and implemented by regulatory agencies through formal notice-and-comment rule-making. RIA and economic analyses focus on the benefits and costs of proposed rules and decisions made under those regulatory systems (U.S. EPA, 1983, 2000). Proposed rules often are key aspects or implementations of oversight systems. Thus, cost effectiveness is embedded in several of our criteria, particularly those related to the attributes and outcomes of the system (Table I and Appendix A;³ e.g., a25, a2, a13, o2, o9, o13, o14, o15, o16, o17, o18).

³ The letter-number combinations in parentheses refer to criteria in Table I and Appendix A throughout this section.

Executive Order 12,866 suggests somewhat broader criteria, requiring not only that every new regulation be subjected to a cost-benefit test, but also that the analysis include: (1) evaluation of the adverse side effects of regulations on health and the environment, (2) qualitative assessment of distributional impacts, and (3) assurances of transparency (White House, 1993, as amended, 2007). Based on these government documents, we included transparency in both the development and execution of oversight systems (d10, a22), the consideration of distributional health impacts (o11), and health and environmental impacts (o10, o12).

Other criteria for the analysis of oversight systems have been described as part of MCDA (Morgan & Henrion, 1990a; Linkov et al., 2006, 2007a, 2007b), but deal more broadly with the system as a whole as opposed to particular decisions (e.g., OTA, 1995; Davies, 2007).


The Office of Technology Assessment used seven criteria to assess regulatory and nonregulatory environmental policy tools, and these appear in similar forms in our criteria list (OTA, 1995): cost-effectiveness and fairness (a21, a25, o2, o11, o12, o13, o14, o15, o18); minimal demands on government (d11, a9, a16, a25, o18); assurance to the public that environmental goals will be met (e5, o1, o5, o12); prevention of hazards and exposure when possible (a4, a6, d3); consideration of environmental equity and justice issues (a21, o7, o11, o12); adaptation to change (a15, a10, e1, e3, o19); and encouragement of technology innovation and diffusion (o1, o3, o8, o9, o14, o15, o16, o17, o18).

The criteria we chose also overlap with many of the criteria that Davies (2007) suggests for oversight of nanotechnology by the EPA: incentives for industry to do long-term testing (a1, a3, a4, a5, a7, a12, a26), monitoring capabilities (a10, a12), legal authority (d3, a1), empirical basis (d13, a5), resources for agencies (d4, d11, d12, a16, a27), clarity of materials to be regulated (d2, a1), recall authority (a4, a10, a12, o3), burden of proof on manufacturers (a2, a6, a26), data requirements (a2), prohibition of marketing (a1, a6), timely adverse events reporting (a10, a12), transparency in safety review (a8, a22), incentives for risk research (a2, a7, o8), proper institutional structures (d6, a9, a14), power relationships and impacts on oversight (a14, a17, a23), and political will (d1).

Our oversight criteria also address the fact that oversight systems can affect the competitiveness of the nation, particularly in the context of trade and World Trade Organization (WTO) agreements (Newell, 2003). Trade can be affected in positive or negative ways due to different standards or testing requirements. For example, U.S. grain exporters have lost hundreds of millions of dollars in trade with the European Union (EU) because U.S. varieties of genetically engineered food crops not approved in the EU are not segregated from other varieties in the United States (Paarlberg, 2002). These broader economic considerations and international harmonization of oversight were also included in our initial list of 66 criteria (a29, o14, o15, o17).

Some analysts have hypothesized that mandatory regulatory systems with clear standards can foster innovation, ultimately improving the economic performance of firms (Porter, 1991; Jaffe & Palmer, 1997; Ogus, 2002). Yet, other studies indicate that regulations can decrease research productivity, particularly for smaller firms (Thomas, 1990). In our criteria, the legal grounding (d3), stringency of the system (a4), institutional structure (a9), economic impacts (o14, o15, o16, o18), and effects on research and innovation (o8, o9) were included to explore these relationships in our historical case studies. There is also evidence that mandatory systems lead to better attributes and outcomes as far as compliance, innovation, and environmental impacts are concerned (e.g., Greene, 1990; Cole & Grossman, 1999). Criteria (a1, a2, a4, a6, o9, o12) were included to explore these relationships as well.

Several criteria relating to what oversight features citizens believe to be important were derived from the public engagement and risk perception literature. That literature shows that citizens may appreciate transparency (d10, a22), exercising rights to know and choose (a22, a28), opportunities for meaningful input not limited to the quantitative risk (d8, d9, a18, a19, a20), and mandatory requirements for safety testing and regulation (d3, d4, a1, a2, a3, a4, a5, a6) (Fischoff et al., 1981; Slovic, 1987; Jasanoff, 1990, 2005; Frewer et al., 1996, 2004; NRC, 1996; Siegrist, 2000; Cobb & Macoubrie, 2004; Macoubrie, 2005; Siegrist et al., 2007a, 2007b). Rigorous oversight can foster consumer or public confidence and trust (o5) and ultimately the success of beneficial technologies (o9, o14, o15, o16) (Porter, 1991; Rabino, 1994; Siegrist, 2000; Macoubrie, 2005, 2006).

Other criteria were based on the social science, ethics, and science and technology studies literature. For example, impacts from oversight systems for genetically engineered organisms have been documented to include changes in industry structure, farmer relationships, and cultural systems (o6, o7, o13) (Jasanoff, 2005). Power relationships and trust are influenced by the treatment of intellectual property (a8), the involvement of industry in decision making and safety testing (d5, a23), and whether there are opportunities for wider public input (d7, d8, d9, a18, a19, a20). These factors may affect the public legitimacy of decisions made about new technological products (e.g., Einsiedel & Goldenberg, 2004; Stewart & McLean, 2004; Thompson, 2007). Confidential business information (CBI) and treatment of intellectual property (a8) have affected the ability of stakeholders and scientists outside of industry to access information about technological products before, during, and after regulatory review; thus, we include transparency among the criteria (a22) (PIFB, 2003a; NRC, 2000).


Transparency has been proposed as a precondition to public trust and confidence (o5), although it is not itself sufficient for trust (Frewer et al., 1996). Also, transparency and public consultation (d7-d9, a18-a20) enhance the credibility of oversight systems if input is considered carefully by decisionmakers and not ignored (e.g., a21) (Jasanoff, 1990). Cash et al. (2003) suggest that the salience, credibility, and legitimacy of information produced from systems for managing boundaries between knowledge creation and action (like oversight systems) are enhanced by communication, mediation, and translation among decisionmakers, the public, and stakeholders during decision making. Several of our criteria relate to their ideas, such as the inclusion of diverse stakeholders and opportunities for public input at key junctures in oversight systems (d7-d10, a14, a17-a22, a27-a28).

The ethics literature is reflected in principles of equity, justice, rights to know and choose, and beneficence or the minimization of harm (d9, a20, a21, a22, a28, o4, o7, o10, o11, o13) (Beauchamp & Walters, 1999). For clinical trial oversight systems, the ability to address major ethical issues (d9, a20), do so in a timely manner (o2), manage conflicts of interest (a23), educate decisionmakers (d12, a27), provide sufficient resources (d11), report adverse events in a timely fashion (a12), and conduct formal assessments (a10) have been identified as key attributes (Emanuel et al., 2004).

Criteria within a category and among the four categories (i.e., development, attributes, evolution, and outcome) are not mutually exclusive. Given our approach to capture the evolution, operation, and adaptation of systems, there is some overlap in our list (Table I). For example, economic development outcomes (o16) cannot be separated from effects on large corporations (o14). Similarly, health (o10) and environmental impacts (o12) often cannot be fully distinguished (i.e., the environment affects human health). A given criterion may be reflected in more than one category. For example, transparency appears in both the development and attributes categories, reflecting its importance both in establishing oversight systems and in making particular decisions about products or applications of technologies (d10, a22).

4. EXPERT AND STAKEHOLDER ELICITATION FOR IDENTIFYING KEY CRITERIA

The 66 criteria described in the previous section were too numerous to be analytically tractable for future work on historical analysis of oversight for emerging technologies. Thus, we assembled a panel of experts and stakeholders as a Working Group to seek their input and consensus on criteria to be used in the six historical case studies (Fig. 1). The 12 Working Group members, by disciplinary background, expertise, and type of affiliation, respectively, included: cell biology, nanobiotechnology, academe; health policy, law, academe; medicine, biochemistry, small industry; business, food, large industry; applied economics, regulation, academe; environmental law, academe; regulatory policy, law, consumer organization; toxicology, public policy, nongovernmental organization (NGO); environmental policy, sociology, academe; science communication, sociology, academe; mechanical engineering, nanoparticles, academe; and engineering and public policy, environmental policy, academe. The Working Group agreed that it was necessary to refine the number of criteria to a manageable set.

The members of the Working Group all met several well-established conditions to qualify as experts. These conditions include substantive contributions to the scientific literature (Wolff et al., 1990), status in the scientific community, membership on editorial committees of key journals (Siegel et al., 1990; Evans et al., 1994), membership on advisory boards, and peer nomination (Hawkins & Evans, 1989). The Working Group provided a variety and balance of institutional perspectives. Some members represent stakeholder groups that are interested in or affected by historical models of oversight or nanotechnology oversight. Most members have had extensive experience with oversight systems and federal regulatory frameworks in one or more of the six areas, and all have had some experience with them.

The Working Group was assembled, presented with the criteria list derived from the literature (Table I; Appendix A available online), and asked to arrive at consensus on what criteria were important for oversight assessment. The derivation of consensus among the panel members was approached from two complementary angles: behavioral and mathematical.


Behavioral approaches generally rely on psychological factors and interactions among experts. They are the dominant means of achieving consensus in bioethics, law, and public policy groups. In bioethics, for example, multidisciplinary dialogical consensus-building is standard, and federal and state committees and professional societies have long used this method to generate consensus. Moreno (2004) describes a consensus process that has worked successfully in addressing bioethical and policy problems, in which members of the group approach the issues with openness, analyze the problem from a range of perspectives (usually ethical, legal, policy, scientific, and medical), articulate the arguments in favor of alternative positions, and work toward agreement. During the course of a two-day meeting with our Working Group, this process was used and was aided substantially by our prior analysis of the literature and synthesis of candidate criteria.

Mathematical schemes designate a functional aggregation rule that accepts inputs from each expert and returns an arbitrated consensus (Winkler, 1968, 1986; Genest & Zidek, 1986). Expert elicitation has typically been used to estimate uncertain quantities (Morgan & Henrion, 1990b). There is not one best way to conduct an expert elicitation; however, attributes of good protocols include flexibility in approach, introduction of the expert to the general task of elicitation, focus on the subject matter to be judged, and good definition of the quantity (or, in this case, the oversight criteria) that is to be elicited (Morgan & Henrion, 1990b). For our quantitative approach, we used a version of expert elicitation with the goal of gaining empirical information about what criteria are important for oversight assessment. We followed these principles by remaining flexible in incorporating feedback from the Working Group members up to the elicitation; spending a day prior to the elicitation giving background on the subject matter (e.g., reviewing the six historical case studies of oversight and emerging issues in nanotechnology oversight); providing a primer on expert elicitation before the exercise; and defining each criterion with not only a description of what it is, but also an example interpretation of that criterion and a guiding question to help with the ranking of it (Appendix A).
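One standard example of such a functional aggregation rule from this literature is the linear opinion pool, in which the group judgment is a weighted average of individual expert judgments. The sketch below is illustrative only; the scores and equal weights are hypothetical and are not the article's data or method.

```python
# Linear opinion pool: a classic mathematical aggregation rule in which the
# arbitrated consensus is a weighted average of expert judgments.
# Scores and weights below are hypothetical.

def linear_opinion_pool(scores, weights=None):
    """Return the weighted average of expert scores (weights default to equal)."""
    if weights is None:
        weights = [1.0 / len(scores)] * len(scores)
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * s for w, s in zip(weights, scores))

# Twelve experts rate one criterion's importance on a 0-100 scale.
expert_scores = [80, 90, 75, 85, 70, 95, 60, 88, 77, 82, 91, 73]
print(linear_opinion_pool(expert_scores))  # arbitrated consensus score
```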

Behaviorally derived agreements often suffer from problems of personality and group dynamics. Mathematical approaches avoid these problems but introduce their own set, as numerically dictated compromises may be universally unsatisfactory. We chose to use both types of approaches to strengthen the quality of the input from the Working Group. The behavioral approach was used to make adjustments to criteria, add or reword criteria, and glean general principles of good oversight. The mathematical approach involved a quantitative expert elicitation process whereby the Working Group members were asked to assign values, or probabilities, indicating how important each criterion was for the evaluation of oversight models (Appendix A).

For the elicitation, we asked each member to assess the importance of each criterion for oversight assessment. The question "How important is it to consider this criterion in our oversight case studies?" was posed. The members were asked to rank the importance of each criterion for oversight assessment, based on their experience and knowledge, on a scale from 0 to 100, with the option of referring to qualitative descriptions of different probability levels. These levels included: Certain (100); Near Certain (80-99); Probable, Likely, We Believe (60-80); Even Chance (40-60); Less than an Even Chance (20-40); Improbable, Probably Not, Unlikely, Near Impossibility (1-20); Impossible (0). Twelve members of the Working Group participated in the exercise. STATA and Excel software were used to analyze the results from the elicitation. A subsequent data report included summaries of responses as histograms, as well as means, median values, and standard deviations for each criterion (Table II).
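The summary statistics in that data report are straightforward to reproduce; the article used STATA and Excel, and the sketch below is a stand-in showing the same computation for one criterion. The twelve response values are hypothetical.

```python
import statistics

# Hypothetical responses from the 12 Working Group members for one criterion,
# each on the 0-100 importance scale used in the elicitation.
responses = [83, 90, 70, 88, 65, 95, 72, 85, 78, 92, 60, 82]

summary = {
    "mean": round(statistics.mean(responses)),
    "median": round(statistics.median(responses)),
    "sd": round(statistics.stdev(responses)),  # sample standard deviation
}
print(summary)  # analogous to one row of Table II
```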

Following the elicitation exercise, we used both the quantitative results from it and behavioral consensus approaches with the authors and Working Group to derive a streamlined set of key criteria for oversight assessment. Project staff and the Working Group had initially agreed upon a target number of approximately 20 criteria for evaluations of the six case studies. The Working Group believed that this number would reduce the list of criteria to a manageable level for analysis while retaining a good degree of breadth and coverage. Thus, we chose a cut-off score from the expert elicitation that would reduce the number of criteria to approximately 20. We selected criteria for which over eight of the members (>70%) gave a score of at least 70 (out of 100). This dropped 42 criteria from the list, with 24 remaining (Table II).
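The cut-off rule itself is simple to state in code: a criterion survives only if more than eight of the 12 members (>70%) scored it at least 70. A minimal sketch, with hypothetical score vectors:

```python
# Keep a criterion only if more than 8 of the 12 experts (>70%) scored it >= 70.
# The score vectors below are hypothetical.

MIN_SCORE = 70
MIN_RATERS = 8  # "over eight of the members"

def passes_cutoff(scores):
    """True if more than MIN_RATERS experts gave at least MIN_SCORE."""
    return sum(s >= MIN_SCORE for s in scores) > MIN_RATERS

elicitation = {
    "a23_conflict_of_interest": [90, 88, 95, 85, 92, 80, 86, 91, 89, 75, 84, 87],
    "e2_distinct_periods":      [50, 40, 60, 30, 75, 45, 55, 20, 65, 48, 52, 38],
}

retained = [cid for cid, scores in elicitation.items() if passes_cutoff(scores)]
print(retained)  # only the first criterion survives this cut-off
```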

Results of the elicitation indicated that mechanics of oversight were important to the Working Group: compliance and enforcement, incentives, institutional structure, flexibility, and capacity remained on the list of 24 criteria.


Table II. Criteria Analysis: Mean and Median Values and Standard Deviation for Each Criterion after Expert Ranking

Criterion | Mean | Median | SD

Development
d1 Impetus | 80 | 83 | 12
d2 Clarity of technological subject matter | 76 | 80 | 18
d3 Legal grounding | 69 | 70 | 16
d4 Federal authority | 70 | 77 | 19
d5 Industry authority | 65 | 75 | 26
d6 Loci | 62 | 60 | 27
d7 Stakeholder input | 81 | 88 | 16
d8 Breadth of input | 71 | 67 | 16
d9 Opportunity for value discussions | 59 | 57 | 17
d10 Transparency | 80 | 85 | 18
d11 Financial resources | 69 | 75 | 20
d12 Personnel education and training | 66 | 73 | 23
d13 Empirical basis | 69 | 78 | 23

Attributes
a1 Legal grounding | 74 | 78 | 17
a2 Data requirement | 82 | 84 | 10
a3 Treatment of uncertainty | 81 | 72 | 16
a4 Stringency of system | 78 | 83 | 15
a5 Empirical basis | 82 | 84 | 11
a6 Compliance and enforcement | 83 | 90 | 13
a7 Incentives | 73 | 84 | 21
a8 Treatment of intellectual property and proprietary information | 74 | 80 | 22
a9 Institutional structure | 62 | 68 | 23
a10 Feedback loop | 69 | 75 | 22
a11 Formal assessment | 63 | 70 | 24
a12 Postmarket monitoring | 76 | 82 | 19
a13 Industry navigation | 71 | 78 | 22
a14 Actors involved | 70 | 75 | 17
a15 Flexibility | 76 | 81 | 15
a16 Capacity | 79 | 81 | 13
a17 Relationship among actors | 63 | 65 | 24
a18 Stakeholder input | 70 | 77 | 23
a19 Breadth of input | 59 | 60 | 28
a20 Opportunities for value discussion | 53 | 50 | 23
a21 Consideration of fairness | 74 | 78 | 17
a22 Transparency | 82 | 88 | 17
a23 Conflict of interest | 86 | 90 | 9
a24 Conflict of views | 70 | 72 | 20
a25 Economic costs and benefits considered | 66 | 74 | 26
a26 Accountability and liability | 72 | 72 | 13
a27 Education of decisionmakers, stakeholders | 65 | 70 | 21
a28 Informed consent | 82 | 88 | 14
a29 International harmonization | 66 | 68 | 23

Evolution
e1 Extent of change in attributes | 59 | 77 | 30
e2 Distinguishable periods of change | 48 | 50 | 30
e3 Extent of change in attributes | 58 | 62 | 21
e4 Change in stakeholder satisfaction | 60 | 62 | 22
e5 Public confidence | 68 | 70 | 15

Outcomes
o1 Product safety | 71 | 78 | 26
o2 Time and costs for market approval | 69 | 75 | 25
o3 Recalls | 57 | 62 | 28
o4 Stakeholder satisfaction | 70 | 70 | 15
o5 Public confidence | 76 | 80 | 18
o6 Effects on social groups | 57 | 60 | 18
o7 Social, ethical, and cultural effects | 63 | 60 | 21
o8 Research impacts | 86 | 90 | 13
o9 Innovation | 74 | 85 | 28
o10 Health | 85 | 88 | 12
o11 Distributional health impacts | 80 | 82 | 14
o12 Environmental impacts | 82 | 84 | 15
o13 Nonindustry economic impacts | 75 | 78 | 18
o14 Effects on big corporations | 56 | 62 | 28
o15 Effects on small- to medium-sized enterprises | 58 | 68 | 29
o16 Economic development | 61 | 68 | 25
o17 Global competitiveness for the United States | 64 | 65 | 20
o18 Distributional economic impacts | 61 | 62 | 20
o19 Proposals for change | 72 | 70 | 16

Note: In the original table, gray boxes marked the criteria eliminated under the consensus cut-off: a criterion was retained only if more than eight experts (>70%) rated it 70 or higher.

Additionally, public confidence in oversight was rated highly by the Working Group as an important outcome of oversight systems. However, criteria associated with economic impacts on industry ranked lower than we expected from our expert and stakeholder group, which contained members from corporations and academic researchers who work with industry to develop new technological products or applications. While the literature reflects an emphasis on the need for oversight systems to reduce burdens on developers of products or applications (e.g., OTA, 1995; IRGC, 2006), this group rated economic outcomes of oversight lower than most other criteria (Table II). This result could reflect different industry viewpoints in our Working Group with respect to the community at large.

To derive a final list of criteria, we used a behavioral consensus approach to combine the quantitative elicitation results with our knowledge of the literature and qualitative Working Group input. We examined the quantitative results for each criterion carefully. Recognizing the imperfections of expert elicitation and mathematical approaches to consensus, we reinstated and combined a few criteria, and revised the description of several based on feedback


    Table III. Final Set of Criteria

    Description and Guiding Question(s)

    Development: 7 Criteria (D)1. Impetus Historical context and driving forces behind the system of oversight or the reasons for developing

    the basic framework for the oversight system. The guiding question here is: What were thedriving forces? Examples could include intense public or legal pressure, key technologicaldevelopments, or in response to an adverse event (reactive) or emerging concerns about thepotential risks and benets prior to legal or public pressure, technological developments, or anyrelease or adverse event (proactive).

    2. Clarity of technologicalsubject matter

    Clarity or denition of the technologies, processes, and products to be overseen. The guidingquestion here is: Are the technologies, processes, and products to be overseen well-dened?Examples could include that the technologies, processes, and products as the subject of oversightare unclear or ill dened (not clear) or that they are well dened and it is clear what falls intooversight (clear).

    3. Legal grounding Basis for development and the clarity of the statutes or rules for implementing the newly developedframework and achieving its goals. The guiding questions are: How explicit are the statutes orrules on which the oversight framework is based? Is it clear that that the decisionmakers in theframework have legal authority for the actions they proposed? Is there grounding to existinglaws? Examples could be that there is considerable ambiguity and room for interpretation in

    executing the policies in the oversight framework (weak) or that there is little ambiguity aboutwhat the agencies can or cannot do (strong).4. Public input Inputs shaping the creation of the oversight system, and the extent of opportunities for engaged

    stakeholders including nongovernmental organizations, trade associations, academics, industry,and other affected groups, to provide input into the development of the initial framework. Thisincludes input on both scientic questions, as well as values questions (social, cultural, andethical). The guiding question is: Was there a process or opportunities for stakeholders tocontribute to discussions or decisions about the basis of the system, how it operates, or how it isstructured? Examples could be that the government or overseers (in the case of a voluntarysystem) made decisions based largely in the absence of stakeholder input (minimal) or that theoverseers made decisions based on a formal process above and beyond Federal Register notices forsoliciting input from stakeholders (signicant).

    5. Transparency Extent to which interested parties could obtain information about decisions during the developmentof the framework. The guiding question is: Were options that the agencies or otherdecision-making bodies were considering known to the public? Were studies about the pros and

    cons of these options available? Examples include that decisionsmakers laid out options andstudies that compared and contrasted them to the public prior to completing the framework (high)or the framework was published as a draft in the Federal Register , but the process for arriving atthe framework remained unknown to interested and affected parties (low).

    6. Financial resources Funding and resources and the amount of money allocated to the development of the oversight. Theguiding question is: How sufcient were the funds provided to the developers of theframework? Examples include that no money was set aside for oversight development (not at all)or ample funds were available for oversight development (sufcient).

    7. Empirical basis Empirical basis for development of the oversight system, including the amount and quality of evidence (scientic, risk, benet, or social impact studies) used. The guiding question is: To whatextent was scientic or other objective evidence used in designing the review or oversight processcentral to the framework? Examples include that there was a body of evidence used to assess theimportant features of data submissions, clinical studies, etc., that would be required in the generalframework (strong basis) or that during the development of the framework there was little to noinformation available on the nature or extent of the risks and benets, or other impacts of

    products or processes, in that qualitative speculation or predictions were used to generate theframework for oversight (weak basis).

    Attributes: 15 Criteria (A)8. Legal basis Legal and policy structure, that is, the clarity of the statutes or rules for implementing the specic

    decisions for processes, trials, research, or products within the oversight framework and achievingits goals. The guiding questions are: How explicit are the statutes or rules on which specicdecisions within the oversight framework are based? Is it clear that the decisionmakers in theframework have legal authority for the actions they propose? Examples include that there is littleambiguity about what the agencies can or cannot do in the context of a specic application orproduct (strong) or there is considerable ambiguity and room for interpretation in executingspecic decisions (weak).


9. Data requirements and stringency: Extent to which empirical studies are submitted prior to market approval, release, or clinical trials, and whether there is adequate legal authority to require data. The guiding questions are: How comprehensive are the safety and other studies required for submittal to authorities? If the system is voluntary, how comprehensive are the data generated, and are they available for review prior to decisions about release or approval? How much regulatory authority is there for requesting new data? Examples include that a letter describing the composition of the product is submitted and there is little authority to request more data and assure compliance (weak) or that a battery of safety studies extensively addressing environmental and human health risks is required and backed by adequate regulatory authority to assure compliance (strong).

10. Postmarket monitoring: Systematic monitoring for adverse or beneficial events after the product is released or trials begin. The guiding question is: Is there a science-based and systematic process for detecting risks and benefits after commercial release, or field or clinical trials? Examples include that once the product is released or trials begin, there is no monitoring for adverse events except anecdotally (little) or that there is an extensive system for reporting potential adverse events and consolidating and evaluating them after trials begin or the product is released (extensive).

11. Treatment of uncertainty: Reporting ranges of possible values or studies in data that are submitted, whether subpopulations are considered, acknowledgment of areas for which little scientific information is available, and recognition that the hazards may not be well categorized. The guiding question is: Is uncertainty accounted for qualitatively or quantitatively in data and study submissions? Examples include that the risk analyses on which decisions are based use uncertainty modeling, account for subpopulations, and qualitatively describe what is unknown (extensive) or that point estimates based on population averages are used and narratives of sources of uncertainty are omitted (limited).

12. Empirical basis: Amount and quality of evidence (scientific, risk, benefit) used for particular decisions. The guiding question is: To what extent was scientific or other objective evidence used in making decisions about specific products, processes, or trials? Examples include that high-quality, extensive evidence on safety is required for product submissions, clinical studies, or field trials (strong basis) or that low-quality, minimal evidence is required for making decisions (weak basis).

13. Compliance and enforcement: Programs and procedures in place to ensure compliance with the oversight process and, where compliance is lacking, to impose consequences and corrections. The guiding question is: To what extent does the system ensure compliance with legal and other requirements, and to what extent can it prosecute or penalize noncompliance? Examples include that there is little compliance with or enforcement of requirements (weak) or that compliance and enforcement are extensive (strong).

14. Incentives: Incentives, financial or otherwise, for compliance with system requirements. The guiding question is: Are the stakeholders in the system encouraged to abide by its requirements? Examples include that there is no incentive structure for compliance (few) or that there are many incentives for compliance in the oversight system, beyond product or trial approval (many).

15. Treatment of intellectual property and proprietary information: Treatment of intellectual property and confidential business information. The guiding questions include: How does confidential information get treated in applications for approval? How does intellectual property factor in? Examples include that decisionmakers share business information with the public and intellectual property is dealt with adequately (high) or that business information is considered confidential and not shared with the public and intellectual property is not dealt with adequately (low).

16. Institutional structure: Type of structure of the framework with regard to the number and/or complexity of the actors involved, most notably federal agencies. The guiding question is: How many agencies or entities with legal authority are involved in the process of decision making within the framework? Examples include that there is a single authority with a simple and concentrated procedure (simple) or that there are multiple authorities, with overlaps or potential gaps (complex).

17. Flexibility: Ability of the framework to be flexible in unique or urgent situations or when new information is obtained. The guiding questions are: Can products or trials undergo expedited review when appropriate? Can products be withdrawn or trials easily stopped when information on potential risks is presented? Examples include that the system is rigid, with only one option or path that is difficult to change with new information (low), or that the system provides numerous ways to account for unique and emerging situations (high).


18. Capacity: Resources of the system, whether expertise, personnel, or financial, to appropriately handle decisions. The guiding question is: Is the system well prepared and equipped to deal with the approvals of trials, products, or processes? Examples include that agency staff are stretched thin and do not have time to do a good job with specific decisions (inadequate) or that agency staff are provided with the resources, expertise, and time to give proper, high-quality attention to the process (adequate).

19. Public input: Extent of opportunities for engaged stakeholders (nongovernmental organizations, trade associations, academics, industry, citizen groups, and other affected groups) to provide input into specific decisions, or categories of decisions, before they are made or during the process. The guiding question is: Is there a process or opportunities for stakeholders to contribute to discussions or decisions about whether certain products, processes, or trials should be approved? Examples include that the government or overseers (in the case of a voluntary system) make decisions largely in the absence of stakeholders (minimal) or that the overseers make decisions based on a formal process, above and beyond notice of and comments on rule-making, for soliciting input from stakeholders (significant).

20. Transparency: Extent to which interested parties can obtain information about particular decisions as they are being made within the oversight framework. The guiding questions are: Are options that agencies or other decision-making bodies are considering known to the public? Are studies about the pros and cons of these options available? Is the process for how decisions are made clearly articulated to interested parties? Examples include that decisionmakers divulge the processes and authorities for review, and the options for and studies about particular products or events as they are being considered, so that it is easy for citizens to track the process for and basis of decisions (high) or that decisions are published in the Federal Register but it is difficult to figure out how, when, and by what criteria products, processes, or trials are reviewed (low).

21. Conflicts of interest: Ability of the system to ensure that conflicts of interest do not affect judgment. The guiding questions are: Do independent experts conduct or review safety studies? Are conflicts of interest disclosed routinely? Examples include that there is no disclosure of conflicts of interest and industry largely conducts studies on its own products without external review, except by agency staff (prominent), or that every possible effort is made to avoid or disclose conflicts of interest (avoided).

22. Informed consent: Stakeholders', patients', research participants', or the public's ability to know, understand, and choose their exposure or the level of risk they accept. The guiding question is: To what extent does the system supply the amount and type of information needed for people to make informed decisions about what they will accept? Examples include that the public has little information about whether it is exposed to, consuming, or subject to certain risks from the product (little) or that the public is meaningfully informed about its exposure and the risks (extensive).

Evolution: 1 Criterion (E)

23. Extent of change in attributes: Extent of change to the system over time. The guiding question is: To what extent has the system changed over time? Examples include that there was no change at all (none) or that there were significant structural changes (extensive). Change can indicate appropriate evolution of the system based on new information or in response to adverse events.

Outcomes: 5 Criteria (O)

24. Public confidence: Public confidence in the system, including views about product or trial safety and trust in actors. The guiding question is: What do diverse citizens and stakeholders think about the system, including disadvantaged, special, or susceptible populations? Examples include that there is widespread fear and mistrust among the public (low) or that there is a general feeling that the oversight system and decisionmakers are doing a good job serving individual and multiple interests and society at large (high).

25. Research and innovation: Impacts on science, research, and innovation, and whether the oversight system encourages research and innovation. The guiding question is: Has the system led to more research and innovation in the field or stifled it? Examples include that the oversight system does not stifle research and innovation, and in fact increases it (positive), or that the oversight system stifles research and innovation in many ways, perhaps due to time delays or the cost of approvals (negative).


26. Health and safety: Health impacts, and whether oversight of the products, processes, or trials leads to impacts on global, national, or local health and safety. The guiding question is: Does the oversight system affect health and safety in positive ways? Examples include that the oversight of products or processes is leading to negative health and safety impacts, either through delays in approvals (e.g., of life-saving drugs) or through approvals of unsafe products (negative), or that the oversight of products, processes, or trials is leading to positive health impacts and increased safety (positive).

27. Distributional health impacts: How the health risks and benefits resulting from the system are distributed. The guiding questions are: Are the health impacts equitably distributed? Is there an inequitable impact on specific social or disadvantaged groups? Examples include that health impacts are not justly and equitably distributed (inequitable) or that health impacts are justly and equitably distributed (equitable).

28. Environmental impacts: Whether oversight of the products or processes leads to impacts on the environment. The guiding question is: Does the oversight system affect the environment in positive ways? Examples include that the oversight system has resulted in negative impacts on the environment (negative) or that there have been beneficial impacts on the environment from oversight (positive).

Note: This final set of 28 criteria is being used to evaluate the historical oversight models.

from the Working Group following the ranking exercise. Five criteria that did not make the consensus cutoff (70% of experts rating the criterion over 70), but that we felt were important and that had relatively high means and medians (Table II), were fully reinstated: legal grounding (d3), institutional structure (a9), postmarket monitoring (a12), stakeholder input (a18), and extent of change (e1) (Table II). Impacts on innovation (o9) did not meet the consensus (70% over 70) cutoff either, but it also had a relatively high mean and median (74 and 85, respectively) and has been viewed in the literature as an important outcome affected by oversight (e.g., OTA, 1995; Jaffe & Palmer, 1997). Therefore, it was reinstated and combined with impacts on research (o8) to address the overlap between the two. We also merged data requirements (a2) with stringency of the system (a4) to address Working Group input about the similarity between these two.
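As a rough illustration (not the project's actual elicitation instrument), the consensus screen described above reduces to a simple rule: a criterion survives if a large enough share of experts rates it highly. The function and ratings below are hypothetical; we read "over 70" as a strict threshold, and the mean and median are computed as in the discussion of Table II.

```python
from statistics import mean, median

def meets_consensus(ratings, score_cutoff=70, agreement_share=0.70):
    """Consensus screen: the share of experts rating the criterion
    above score_cutoff must reach agreement_share (the '70% over 70' rule)."""
    share = sum(r > score_cutoff for r in ratings) / len(ratings)
    return share >= agreement_share

# Hypothetical 0-100 ratings for one criterion from ten experts.
ratings = [85, 90, 72, 68, 75, 80, 95, 70, 65, 88]
print(meets_consensus(ratings))        # True: 7 of 10 ratings exceed 70
print(mean(ratings), median(ratings))  # 78.8 77.5
```

A criterion failing this screen but showing a high mean and median could then be flagged for the kind of reinstatement discussion described above.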

Twenty-eight criteria remained in our final set (Table III). Seven development criteria were retained that apply to the formal process of developing laws, rules, standards, guidance documents, programs, and policies that relate to the overall process for considering individual products, processes, or clinical trials: impetus, clarity of technological subject matter, legal grounding, public input, transparency, financial resources, and empirical basis. Fifteen attribute criteria were retained that apply to the process, whether formal or informal, of making decisions about specific products, subcategories of products, clinical trials, or other ways in which the framework is implemented: legal basis (formerly legal grounding), data requirements and stringency, postmarket monitoring, treatment of uncertainty, empirical basis, compliance and enforcement, incentives, treatment of intellectual property and proprietary information, institutional structure, flexibility, capacity, public input, transparency, conflicts of interest, and informed consent. One evolution criterion was retained to capture how the system has changed over time and why: extent of change in attributes. Five outcome criteria were retained that apply to the assessment of the impacts of decisions stemming from the oversight framework: public confidence, research and innovation, health and safety, distributional health impacts, and environmental impacts.

5. SYSTEMS APPROACH FOR RELATIONSHIPS AMONG CRITERIA

Oversight systems are complex, and relationships among their attributes, outcomes, and how they develop and change are intricate, dynamic, and involve feedback. As such, hypotheses about which criteria are important for good oversight will be formulated and tested across historical models using a systems approach (Fig. 2) and the final set of criteria (Table III). Systems approaches are useful in cases where mental models (people's understandings of systems) are crucial for analysis, given high degrees of complexity, limited empirical information, and multiple types of parameters (Forrester, 1993). It has been suggested that effective methods for learning about complex, dynamic systems include elicitation of participants in the system for their perceptions, creation of maps of the feedback structure of a system from those perceptions, and stronger group processes (Sterman, 1994). We are employing these strategies through our work to better understand oversight systems for emerging technologies (Fig. 1).


Fig. 2. Systems approach: types of and relationships among criteria. Criteria were placed into categories of development, attributes, or outcomes of oversight systems, as well as how systems change over time. Relationships among criteria will be explored in future work through the cross-comparisons of historical oversight systems. A systems model with complex interactions among criteria and feedback is depicted. Solid arrows indicate relationships in which outcome criteria are the dependent variables and used for evaluating oversight systems. Dotted arrows indicate relationships between other categories of criteria, which may include independent or dependent variables and evaluative or descriptive criteria. Striped arrows indicate feedback from outcomes to features of oversight systems, and in these cases, outcomes impact dependent variables in other categories of criteria.


However, with our efforts to avoid oversimplifying oversight systems into linear models, we struggled with whether to place our criteria into categories of evaluative versus descriptive, or independent versus dependent, variables at the outset of our work. Initially, we will consider the outcomes that most people would agree upon as results of good oversight as key dependent variables and evaluative criteria (e.g., the five remaining outcome criteria of public confidence, positive and justly distributed health and environmental impacts, and increased research and innovation). A central question of our approach to assessing oversight systems is whether criteria in the attributes, evolution, and development categories (initially considered as independent variables) positively or negatively impact those key outcome criteria (initially the dependent variables) (Fig. 2, solid arrows). For example, transparency in development or operation of oversight systems (Table III, D5 or A20) is thought to promote public confidence (Table III, O24). In this case, transparency would be considered the independent or descriptive variable and public confidence the dependent or evaluative one.

However, other relationships among criteria will be explored. Several attributes and development criteria are normatively considered good features of oversight, and these can be used on their own to judge an oversight system. Transparency is thought to be a good feature of oversight (D5, A20) in that it promotes ethical principles of autonomy and rights to know (Beauchamp & Walters, 1999). Regarded this way, transparency is an evaluative and independent criterion. Yet other criteria in the development or attributes categories, such as institutional structure (A16), can impact transparency, making transparency a dependent and evaluative variable (Fig. 2, dotted arrows). Furthermore, with feedback, transparency could become a dependent and evaluative variable based upon an outcome criterion (Fig. 2, striped arrows). Therefore, transparency can be placed into multiple categories depending on the relationship being explored.
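To make these shifting roles concrete, the relationships in Fig. 2 can be sketched as a small typed, directed graph in which a criterion's role (independent, dependent, or both) falls out of its incoming and outgoing edges. The sketch below is illustrative only; the edge list encodes just the transparency examples discussed above, not a full model of any oversight system.

```python
# Minimal sketch: criteria relationships as a typed, directed graph.
# Edge types mirror Fig. 2: "solid" (feature -> outcome), "dotted"
# (relationships among non-outcome categories), and "striped"
# (feedback from outcomes back to system features).
edges = [
    ("A20 transparency", "O24 public confidence", "solid"),           # feature -> outcome
    ("A16 institutional structure", "A20 transparency", "dotted"),    # attribute -> attribute
    ("O24 public confidence", "A20 transparency", "striped"),         # feedback loop
]

def roles(node):
    """A criterion acts as an independent variable when it has outgoing
    edges and as a dependent variable when it has incoming edges."""
    out = any(src == node for src, dst, kind in edges)
    inc = any(dst == node for src, dst, kind in edges)
    return [name for name, flag in (("independent", out), ("dependent", inc)) if flag]

for node in sorted({n for src, dst, _ in edges for n in (src, dst)}):
    print(node, "->", roles(node))
# A20 transparency -> ['independent', 'dependent']  (both roles, as in the text)
```

Representing the system this way keeps the category assignment provisional: adding or removing an edge changes a criterion's role without changing the criterion itself.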

Additionally, some criteria that seem purely descriptive at this point might turn out to be evaluative after historical cross-comparisons of oversight models (Fig. 1, future work). For example, institutional structure (A16) seems to be a description of an oversight system, and there currently is not sufficient evidence in the literature to determine what type of institutional structure is best for oversight of emerging technologies. However, this criterion might turn out to be correlated with positive outcomes or other evaluative criteria, such as transparency, after our cross-comparisons. If so, a hypothesis about institutional structure and its contributions to good oversight can be generated.

As a result of these complexities, and in consultation with the Working Group, we chose not to categorize our initial or final criteria with more resolution than the four categories of development, attributes, evolution, and outcomes at this point. There is precedent in the literature for blending multiple types of criteria in analysis of decisions (Morgan & Henrion, 1990a, p. 51). However, our methodology is unique


in the application of this approach to oversight systems and consideration of complexities and feedback in oversight.

    6. CONCLUSIONS AND FUTURE WORK

We have developed a broad set of criteria to describe and assess oversight systems for emerging technologies and their applications. We derived these criteria using multidisciplinary methods with careful attention to the multidimensional nature of oversight. As discussed, the criteria are both descriptive and evaluative, addressing principles and features of the system, the evolution and adaptability of the system over time, and outcomes of the system. Our work incorporates a diversity of perspectives on oversight and combines quantitative and qualitative methods. From qualitative analysis of the multidisciplinary literature on oversight, we incorporated what many groups, including experts, stakeholders, and citizens, believe to be important for good oversight. Through the use of quantitative elicitation and consensus methods with our Working Group, we have directly included what those familiar with oversight systems believe to be important for oversight assessment.

The resulting criteria reflect this broad consideration of perspectives and literature. In the final set, criteria range in subject matter from the importance of sound science in oversight to the extent of opportunities for public input into the design and execution of oversight. The current outcome criteria (Table III) are heavily weighted toward health and environmental impacts, which reflects the importance that multiple experts and stakeholders place on the ethical principle of maximizing benefits and minimizing harm (Beauchamp & Walters, 1999). Impacts on research and innovation are also included, and these, in turn, are believed to have wider economic impacts on industry and society (NRC, 2006).

Our work is based on the idea that the design and implementation of oversight for nanotechnology should be schooled by the past successes and failures of oversight systems for related technologies. In our future work aimed at deriving lessons for the oversight of nanotechnology, quantitative expert elicitation and application of the final criteria will continue to complement other prongs of the IOA approach, which include literature reviews about the performance of historical systems, assessment of public opinion about oversight systems from the literature, semi-structured interviews with stakeholders and experts to evaluate oversight systems, and behavioral consensus methods to discuss and debate attributes and outcomes of systems (Fig. 1). We are now using IOA to actively analyze and evaluate six historical case studies that are related to the application of nanotechnology to biological systems: gene therapy, genetically engineered organisms in the food supply, human drugs, medical devices, chemicals in the environment, and chemicals in the workplace. Through cross-comparisons of these historical oversight cases, hypotheses about which oversight features affect certain outcomes will be generated and tested in order to derive principles and lessons for oversight of related nanotechnology applications.

We propose that comparisons across case studies using a consistent set of criteria will result in defensible and evidence-supported lessons for future oversight systems for nanotechnology products (Fig. 1). The final set of criteria embedded within a broader IOA approach will be used to compare relationships among the development, attributes, evolution, and outcomes of oversight systems across historical case studies. Several criteria will then likely progress from their descriptive role to being useful indicators of the quality of oversight systems and predictors of positive outcomes that satisfy a majority of citizens and stakeholders. For example, we may find that outcomes such as improved human health or environmental quality (outcome criteria in Table III) are consistently correlated with increased public input (an attribute criterion in Table III) across the historical case studies.
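As a rough sketch of how such a hypothesis could be checked once the six cases are scored on the final criteria, the snippet below computes a product-moment correlation between an attribute score and an outcome score across cases. The scores are invented placeholders for illustration, not project findings, and a rank-based measure could be substituted given so few cases.

```python
from statistics import mean

def pearson(xs, ys):
    """Product-moment correlation between two equal-length lists of scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical 0-100 scores for the six historical cases, in a fixed order
# (e.g., gene therapy, GE food, drugs, devices, environmental chemicals,
# workplace chemicals).
public_input  = [30, 55, 70, 40, 80, 60]   # attribute criterion 19
health_safety = [35, 50, 75, 45, 85, 55]   # outcome criterion 26
print(round(pearson(public_input, health_safety), 2))  # a value near +1 would support the hypothesis
```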

In summary, IOA blends theory, methods, and ideas from legal, bioethics, and public policy approaches with the practical goals of providing guidance to policymakers, decisionmakers, researchers, industry, patients, research subjects, consumers, and the public at large. Integrating multiple methods and criteria for oversight assessment will appeal to a wide range of stakeholders bringing a range of perspectives to bear. As we begin to apply the criteria to historical models of oversight, we will also be able to assess the degree of agreement and polarization of expert and stakeholder opinion on historical oversight systems, which will be instructive for diagnosing controversy and how it impacts features and outcomes of oversight.

We expect that our multidisciplinary IOA approach could be widely applicable to other emerging technologies, facilitating assessment of current regulatory oversight systems, the identification of possible changes to existing systems, and the design of


new ones. We anticipate that this approach will be a valuable tool for analyzing multiple perspectives, features, outcomes, and tradeoffs of oversight systems. Such an approach that incorporates the viewpoints of key disciplines and the perspectives of multiple stakeholders could help to ameliorate controversy and conflict as new technologies emerge and oversight systems for them are considered and deployed.

    ACKNOWLEDGMENTS

This work was supported in part by National Science Foundation NIRT Grant SES-0608791 (Wolf, PI; Kokkoli, Kuzma, Paradise, Ramachandran, Co-PIs). Any opinions, findings, and conclusions or recommendations expressed in this article are those of the authors and do not necessarily reflect the views of the National Science Foundation. The authors would like to thank the Working Group participants: Dan Burk, J.D., M.S.; Steve Ekker, Ph.D.; Susan Foote, J.D.; Robert Hall, J.D.; Robert Hoerr, M.D., Ph.D.; Susanna Hornig Priest, Ph.D.; Terrance Hurley, Ph.D.; Robbin Johnson; Bradley Karkkainen, J.D.; George Kimbrell, J.D.; Andrew Maynard, Ph.D.; Kristen Nelson, Ph.D.; David Norris, Ph.D.; David Y. H. Pui, Ph.D.; T. Andrew Taton, Ph.D.; and Elizabeth J. Wilson, Ph.D., as well as collaborators Efrosini Kokkoli, Ph.D.; Alison W. Tisdale; Rishi Gupta, M.S., J.D.; Pouya Najmaie, M.S.; Gail Mattey Diliberto, J.D.; Peter Kohlhepp; Jae Young Choi; and Joel Larson for their valuable input on the project. Additional contributors to refinement of project methodology include Dave Chittenden; Judy Crane, Ph.D.; Linda Hogle, Ph.D.; William D. Kay, Ph.D.; Maria Powell, Ph.D.; and Michael Tsapatsis, Ph.D. The authors would also like to thank Audrey Boyle for her project management.

APPENDIX A: ELICITATION SURVEY INSTRUMENT

    Please see the online appendix.

    REFERENCES

Abraham, J. (2002). Regulatory science as culture: Contested two-dimensional values at the US FDA. Science as Culture, 11, 309–335.

Beauchamp, T., & Walters, L. (1999). Ethical theory and bioethics. In T. Beauchamp & L. Walters (Eds.), Contemporary Issues in Bioethics, 5th ed. (pp. 1–32). Belmont, CA: Wadsworth Publishing Company.

Belton, V., & Stewart, T. J. (2002). Multiple Criteria Decision Analysis: An Integrated Approach. Boston: Kluwer Academic Publishers.

Cash, D. W., Clark, W. C., Alcock, F., Dickson, N. M., Eckley, N., Guston, D. H., et al. (2003). Knowledge systems for sustainable development. PNAS, 100(14), 8086–8091.

Cobb, M. D., & Macoubrie, J. (2004). Public perceptions about nanotechnology: Risks, benefits and trust. Journal of Nanoparticle Research, 6, 395–405.

Cole, D. H., & Grossman, P. Z. (1999). When is command and control efficient? Institutions, technology, and the comparative efficiency of alternative regulatory regimes for environmental protection. Wisconsin Law Review, 5, 887.

Davies, C. (2006). Managing the Effects of Nanotechnology. Project on Emerging Nanotechnologies. Washington, DC: PEN 2.

Davies, C. (2007). EPA and Nanotechnology: Oversight for the 21st Century. Project on Emerging Nanotechnologies. Washington, DC: PEN 2.

Einsiedel, E. F., & Goldenberg, L. (2004). Dwarfing the social? Nanotechnology lessons from the biotechnology front. Bulletin of Science, Technology, and Society, 24(1), 28–33.

Emanuel, E., Wood, A., Fleishman, A., Bowen, A., Getz, K. A., Grady, C., et al. (2004). Oversight of human participants research: Identifying problems to evaluate reform proposals. Annals of Internal Medicine, 141, 283–292.

Evans, J. S., Gray, G. M., Sielken, R. L., Smith, A. E., Valdez-Flores, C., & Graham, J. D. (1994). Use of probabilistic expert judgment in distributional analysis of carcinogenic potency. Risk Analysis, 20, 15–36.

Farrow, R. S., Wong, W., Ponce, R. A., Faustman, E. M., & Zerbe, R. O. (2001). Facilitating regulatory design and stakeholder participation: The FERET template with an application to the Clean Air Act. In P. Fischbeck & R. S. Farrow (Eds.), Improving Regulation: Cases in Environment, Health and Safety (Ch. 19). Washington, DC: Resources for the Future Press.

Fischhoff, B., Lichtenstein, S., Slovic, P., Derby, S. L., & Keeney, R. L. (1981). Acceptable Risk. Cambridge: Cambridge University Press.

Forrester, J. (1993). System dynamics and the lessons of 35 years. In K. B. D. Greene (Ed.), Systems-Based Approach to Policymaking. Norwell, MA: Kluwer Academic Publishers.

Frewer, L. J., Howard, C., Hedderley, D., & Shepherd, R. (1996). What determines trust in information about food-related risks? Underlying psychological constructs. Risk Analysis, 16(4), 473–486.

Frewer, L., Lassen, J., Kettlitz, B., Scholderer, J., Beekman, V., & Berdal, K. G. (2004). Societal aspects of genetically modified foods. Food and Chemical Toxicology, 42, 1181–1193.

Genest, C., & Zidek, J. V. (1986). Combining probability distributions: A critique and an annotated bibliography. Statistical Science, 1, 114–148.

Greene, D. L. (1990). CAFE or price? An analysis of the effects of federal fuel economy regulations and gasoline price on new car MPG, 1978–1989. Energy Journal, 11, 37–58.

Hawkins, N. C., & Evans, J. S. (1989). Subjective estimation of toluene exposures: A calibration study of industrial hygienists. Applied Industrial Hygiene Journal, 4, 61–68.

International Risk Governance Council. (2006). Survey on Nanotechnology Governance: Volume B. The Role of Industry. Retrieved August 6, 2007 from http://www.irgc.org/spip/IMG/projects/Survey on Nanotechnology Governance - Part B The Role of Industry.pdf.

Jaffe, A. B., & Palmer, K. (1997). Environmental regulation and innovation: A panel data study. Review of Economics and Statistics, 79, 610–619.

Jasanoff, S. (1990). The Fifth Branch: Science Advisors as Policymakers. Cambridge, MA: Harvard University Press.


Walters, L. (2004). Human embryonic stem cell research: An intercultural perspective. Kennedy Institute of Ethics Journal, 14(1), 3–38.

White House. (1993, amended 2007). Regulatory Planning and Review. (Executive Order #12,866.) Washington, DC: White House, Office of the Press Secretary. 30 September. Available at http://govinfo.library.unt.edu/npr/library/direct/orders/2646.html. Amendments retrieved September 6, 2007 from http://www.whitehouse.gov/news/releases/2007/01/20070