Trust in Science

BERNARD BARBER

TRUST is an essential constituent of all social relationships and all societies. One sense of trust refers to an expectation or prediction that an assigned or accepted task will be competently performed. We trust, in this sense, that a person who is acting in a particular role or a particular capacity will do so at a reasonably expected level of proficiency. This meaning of trust is important in all societies, but is perhaps especially so in contemporary societies where there is such a vast accumulation of knowledge and technical expertise based on that knowledge. Scientists very much expect that a scientist who has the qualifications adjudged necessary to be a scientist can be trusted in this sense.

A second meaning of trust is the reposing of fiduciary obligations and responsibilities in an individual or on an individual. We trust that the person will fulfil his duty in certain situations and that he will place the obligations which are by tradition inherent in his role to his colleagues, clients, or the institution of which they are all members, above his own immediate interest or anticipated advantage. As a trusted person in this sense, observing his fiduciary obligations, it is in his own moral interest to put his other interests second. This kind of trust is important in all societies, but it is especially important in contemporary society where greater complexity and specialisation, and the difficulty of precise and effective surveillance of performance, enlarge the need for a sense of responsibility for others and for the institution or society as a whole. The difficulties of effective surveillance are matched by the difficulties of specification of the activities which have to be performed in the role and just how they are to be performed. Scientists very much expect that a qualified scientist can be trusted in this sense too, to observe the moral norms of science, to fulfil his normative obligations to his immediate colleagues, to the broader scientific community, and to the public as well as to the particular institution in which he works. Trustfulness, trustworthiness, trust in both senses are indispensable to the growth of scientific knowledge.

These two forms of trust are quite distinct from each other. This is certainly the case in science. To say of an individual scientist, for example, that "he is too clever by half" is perhaps to say that we trust his technical competence more than we trust his moral sense or his fiduciary responsibility.

The relative ascendancy of each of the two forms of trustworthiness may vary from one social institution to another and from one time to another. Thus, market institutions in our society, although relying a great deal on both kinds of trust, still place more reliance on price mechanisms, on the market, to control behaviour than do scientific and medical institutions. The learned professions rely very much on trust of both kinds to ensure effective performance and unswerving adhesion to moral responsibility.

Even in the same institution, the weight accorded to each of the two forms of trust may vary over time. As recently as the late 1960s, medical scientists did not assign much weight to fiduciary responsibility for the human subjects of their experiments. There was little concern for obtaining informed consent from these subjects and for making sure that the benefits of the research for the subjects were greater than its costs. In the late 1960s, when about 350 medical scientists, all of whom used human subjects, were asked "What three characteristics do you most want to know about another research worker before entering into a collaborative relationship with him?", most of them replied that they looked for scientific competence; 86 per cent of the respondents mentioned "scientific ability", 45 per cent "motivation to work hard", and 32 per cent "intellectual honesty". Only 6 per cent mentioned "ethical concern for research subjects".1

It is very likely that nowadays the responses would give more weight to a sense of moral responsibility towards their subjects. This is so because of the scandals that have revealed the unethical use of human subjects. Medical education now pays somewhat more attention than it once did to the moral trustworthiness of physicians towards subjects and patients. Finally, and perhaps most importantly, all scientists in the United States using human subjects must now have their research plans approved by institutional review boards, to determine whether these plans meet standards of fiduciary responsibility towards their human subjects. This approval was required by the National Institutes of Health in 1986, as a condition for financial support.2

Another example of change in recent years is the greater salience of the fiduciary obligation of scientists to their immediate colleagues and to the scientific community as a whole. This might be a result of the cases of fraud in science that have occurred at major universities such as Harvard, Yale, Columbia, and at the Sloan-Kettering Institute.

Violations of both kinds of trust incur negative sanctions, for trust alone is not sufficient to enforce compliance with technical and ethical standards. These sanctions may be either informal--ridicule, unhelpfulness or ostracism--or formal--ethics committees, surveillance, insurance, or legislation and legal action.

Within the scientific community, there has been a strong tendency to rely on trust itself and on informal processes to reinforce it. However, as untrustworthiness has seemed to increase and has become publicly known, scientists, grant-awarding bodies and the attentive public have proposed the use of more formal arrangements. In science, universal and prevailing trust is preferred. Informal controls operate but they are not admired. Formal arrangements are abhorred, and are seen as affronts to the valued solidarity and co-operation of scientists with each other. Stable trust expresses and maintains the shared values of the scientific communities. When trust breaks down, when it seems that the values of science are not shared, scientists experience poor morale, mutual suspicion, overt hostility and conflict. All this makes impossible the easy co-operation and trustworthiness in both senses that are essential to effective scientific work. Intense rivalry among scientists and their laboratories may result in distrust and its negative consequences. This is what seems to have occurred in the rival American and French laboratories working on acquired immune deficiency syndrome.

1 Barber, Bernard, Lally, John J., Makarushka, Julia Loughlin and Sullivan, Daniel, Research on Human Subjects: Problems of Social Control in Medical Experimentation (New York: Russell Sage, 1973), pp. 125-126.

2 For an early and still useful survey, see Katz, Jay (ed.), Experimentation With Human Subjects (New York: Russell Sage, 1972). For an account of the "Tuskegee scandal", see Jones, James H., Bad Blood (New York: Free Press, 1981). On the ethical trustworthiness of social scientists, see Reynolds, Paul Davidson, Ethical Dilemmas and Social Science Research (San Francisco, California: Jossey Bass, 1979).

The substance and limits of both trusted competence and fiduciary trustworthiness are hard to define. There is some tendency, therefore, for a trusted scientist to think he is not being trusted enough and hence to overstate his claims to competence or fiduciary performance. Some scientists think that they must never be subjected to moral questioning or reproach. For example, American molecular biologists, anticipating at one point the possibility of harmful consequences of gene-splicing, met at Asilomar in California to promulgate a set of protective rules for work in their field. This was a manifestation of scientific trustworthiness. But when laymen in, for example, Ann Arbor or Cambridge, Massachusetts, protested against this action as an "usurpation" of their rights to participate in a decision about a matter so important to the public, leading molecular biology scientists regarded this as an attack on their fiduciary trustworthiness.

Since there is always some danger of untrustworthiness in one respect or another, there is a constant need for "rational distrust". Colleagues and workers in the same or related fields of scientific research might well think that distrust is the appropriate and rational response to certain claims of competence or fiduciary responsibility. Not every expression of distrust towards a particular scientist or a particular scientific assertion is, as some scientists have held, an indication of public ignorance or anti-scientific attitudes. All specialists, including scientists, must expect occasionally to be questioned or held to account with regard both to their competence and their fiduciary concern for the public interest and welfare. But scepticism about the trustworthiness of scientists might well be a result of passionate prejudice and ignorance or indifference to the values served by science. Like trustworthiness and untrustworthiness, the distrustfulness of the laity is also difficult to assess. Knowledge, honesty, discrimination and wisdom are required.

Furthermore, no institutional provisions to guarantee trustworthiness are foolproof, nor, once established, do they go on working automatically. Trustworthiness, like the integrity of individuals and the efficacy of the scientific community, requires unceasing attention.

A Brief Historical Note

Trust within science, and public trust in science, depend on the existence of certain technical and moral norms as standards for behaviour. The moral norms of the scientific community--such as organised scepticism, the sharing of knowledge, and disinterestedness--first explicitly discussed by Professor Robert Merton,3 have been denied; they have been alleged to be no more than "rationalising ideologies" used to justify "interests". These norms are integral to the effective working of the social institutions through which scientific work is conducted. The technical norms--such as the standard of rationality, and testing by empirical evidence, preferably through controlled experiment--have long been acknowledged as constitutive of scientific procedures.4

These two kinds of norms first emerged from embryonic prototypes into the requisites of an explicit role in the seventeenth century. 5 Scientific roles in that century were primarily recognised and supported through informal sets of relationships among "scientists". (The latter term came into existence only in the 1830s.) Although informal relationships are preferred by scientists for the conduct of their work, more formal associations were established to foster communication, to gain public recognition and legitimacy, and to obtain financial support. The Royal Society, and many subsequently established scientific academies and professional associations, and journals which published the work of scientists after proper assessment, became the formal complements of informal relationships among scientists in the maintenance of trust in the competence and the fiduciary responsibility of scientists.

The late Professor Joseph Ben-David stressed the importance of the strengthening of the old academies and the emergence of new ones at the end of the eighteenth and in the early nineteenth centuries. He attached so much importance to this indispensable development for the maintenance of trust in science that he called it "the eighteenth-century revolution in science". During the nineteenth and twentieth centuries, scientific academies transcended more and more their national boundaries, while also reinforcing the arrangements for the assurance of scientific trustworthiness. There has been no further "institutional revolution" to compare with that of the eighteenth century. The established and strengthened informal and formal social mechanisms of science provided the conditions of trust that have made substantive scientific discoveries possible.6

3 Merton, Robert K., The Sociology of Science (Chicago: University of Chicago Press, 1973), Ch. 13. See also Barber, Bernard, Science and the Social Order (Glencoe, Ill.: Free Press, 1952).

4 See Brown, James Robert (ed.), Scientific Rationality: The Sociological Turn (Dordrecht: D. Reidel, 1984).

5 Ben-David, Joseph, "Organization, Social Control, and Cognitive Change in Science", in Ben-David, Joseph and Clark, Terry Nichols (eds), Culture and Its Creators: Essays in Honor of Edward Shils (Chicago: University of Chicago Press, 1977), Ch. 11. See also Ben-David, Joseph, The Scientist's Role in Society (Englewood Cliffs, N.J.: Prentice-Hall, 1971; 2nd edn, Chicago: University of Chicago Press, 1984).

Trust Within Science

Trust within science depends on the continuing and effective fulfilment of both the cognitive and the moral norms that govern the actions of scientists. Fulfilment of the moral norms is evidence of satisfactory fiduciary responsibility to the community of scientists, and it earns a reputation for trustworthiness in that respect. Failure or violation in either respect is regarded as evidence of untrustworthiness and it leads to informal sanctions by colleagues. In cases of severe breaches in the norms, however, more formal mechanisms of control and sanctions, such as dismissal or the withdrawal or refusal of financial support might be demanded.

Violations of both kinds occur in science.7 The scientific errors that occur as a result of technical incompetence have been roughly classified as "reputable" and "disreputable" errors. The "reputable" errors are those that are sometimes made by all scientists, despite their having maintained high standards with regard to the general technical norms relevant to all scientific work, and to the somewhat more limited technical procedures that are necessary in the various and numerous specialties into which scientific work is divided. The "disreputable" errors result from gross neglect or deliberate violation of either general or more narrow technical norms. Reputable errors are accepted as part of the game, the inevitable slips that happen in the careers of even outstanding scientists. They may occasion informal modes of control such as teasing, but they do not give rise to charges of grave untrustworthiness.

Disreputable errors arouse much stronger reactions and moral concern among scientific colleagues. Scientists express contempt and anger at such inexcusable incompetence. Disreputable errors waste the time and resources of those who are trying to replicate studies, or to conduct new research and analysis on the basis of such spurious reports. They put untrustworthy observations into the pool of observations on which scientists in any specialty depend. Such malefactors are despised if their errors cause a lot of trouble and are repeated. Errors which occur in connection with possibly important findings are especially censured. In a great deal of science, experiments once reported in publications are not repeated by others; they are taken on trust. This is especially true of routine and minor work. But where work seems important, scientific colleagues pay more attention and are more likely to try to replicate the results, or to use them in their own research. Errors connected with minor "discoveries" may never be found out because little attention is paid to such results.8

6 Kuhn, Thomas S., The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1962); and The Essential Tension (Chicago: University of Chicago Press, 1977); Lakatos, Imre and Musgrave, Alan (eds), Criticism and the Growth of Knowledge (Cambridge: Cambridge University Press, 1970).

7 Zuckerman, Harriet, "Deviant Behavior and Social Control in Science", in Sagarin, Edward (ed.), Deviance and Social Change (Beverly Hills, Calif.: Sage Publications, 1977); and "Norms and Deviant Behavior in Science", Science, Technology & Human Values IX (Winter 1985), pp. 7-13.

While the indignant disapproval provoked by the untrustworthy incompetence of disreputable errors indicates that even the technical norms of science are supported by moral attitudes, much stronger moral condemnation comes down on those scientists who are untrustworthy with regard to the central moral norms of science such as organised scepticism, universalism, the sharing of knowledge, and disinterestedness. Ad hominem attacks by scientists on their colleagues, plagiarism, secretiveness, and falsification of reports of observation are types of moral delinquency that violate one or more of these central moral norms of science.

Plagiarism, for example, is a violation of the norm which requires that a scientist must be given credit for his discovery and his contribution to the shared body of knowledge in his field.9 There is no property in scientific knowledge except the "intellectual property" that may be thought to exist in the acknowledgements and deference that colleagues give to the producers of knowledge in recognition of their achievements.

Secrecy, similarly, is a violation of the norm which requires the sharing of knowledge. The secretive scientist keeps from the common body of knowledge findings and techniques that are necessary for its further enlargement. Individual scientists or laboratories in rivalry with others working on the same problems incline towards being secretive.10 It is, of course, often difficult to specify just where secrecy in science exists. What looks to outsiders like secretive withholding of his results by a scientist may look to him like proper caution and concern to be certain about the validity of his results before disclosing them. He does not disclose until he is sure that he is right, and perhaps also to ensure that he receives appropriate credit for his discovery.

Fraud in Science

Fraud, the deliberate falsification of observations in science, is one of the most odious actions a scientist can perform and results in the strongest condemnation for fiduciary untrustworthiness. The scientist is expected to place the truthfulness of a scientific account above all else; if he falsifies results to elevate his status, he is placing his own advantage over that of the scientific community. The fraudulent scientist puts first his own self-interest in gaining recognition at any cost.

8 For a detailed account of a recent disreputable error in a potentially important discovery, the case of "polywater", see Zuckerman, H., "Deviant Behavior", in Sagarin, E. (ed.), op. cit., pp. 111-112.

9 Merton, R. K., The Sociology of Science, op. cit., Ch. 14.

10 See Latour, Bruno and Woolgar, Steve, Laboratory Life: The Social Construction of Scientific Facts (Beverly Hills, Calif.: Sage Publications, 1979).

Fraud in science is a heterogeneous matter. There may be outright fabrication of data; there may be manipulation of observations and of the reports on the data to make them come out the way the scientist wants. There may be suppression of data which would falsify or call into question the scientist's findings, and there may be misappropriation of data and of credit for work done. Something like all these fraudulent behaviours was discussed about 150 years ago by Charles Babbage as "forging", "trimming", and "cooking".11 The present magnitude, incidence and distribution of fraud in science are not entirely clear. There are some case studies but nothing reliable is known about how widespread the phenomenon is.

A special session at the annual meeting of the American Association for the Advancement of Science in 1985 discussed this subject. Some of the participants thought that there had been a great increase of fraud in science, and that the increase was destroying the moral integrity of science.12 Most of the participants were from the biomedical sciences; it is from those fields that most of the recent cases of fraud have been reported.

The participants thought that competitiveness among biomedical scientists is the main cause of fraud in research. Dr Robert G. Petersdorf, dean of the School of Medicine at the University of California, San Diego, said: "It is true that science in 1985 is too competitive, too big, too entrepreneurial, and bent too much on winning." Dr Hendrik Bendixen, dean of the College of Physicians and Surgeons at Columbia University, where two medical research workers were recently required by an investigating committee to retract nine published papers because of alleged manipulation of data, concurred; he said that "the high pressure environment" at the medical schools of major universities is responsible for the current cases of fraud. The gist of the argument is that there is too much competition for scarce research funds and too much pressure to produce the expected volume of publications in order to gain promotion to permanent tenure.

Dr Petersdorf and other participants felt that this competition is especially harsh within and between large laboratories that are now so important in the biomedical as well as other sciences. Dr Patricia Woolf reported that:

fraud has been detected at our best universities, where research excellence is emphasized and where many professors do publish considerably more papers than the norm... young people on the tenure track have been more frequently caught at fraud than older, established scholars... many of the perpetrators of publicly disclosed scientific fraud had published many papers, especially in the time period immediately surrounding the fraud.13

11 Babbage, Charles, Reflections on the Decline of Science in England and on Some of Its Causes (New York: Scholarly Books, 1976), pp. 177-182. (Published originally in 1830).

12 The New York Times, 30 May, 1985; Science, CCXXVIII (1985), pp. 1292-1294.

13 Ibid.


There is undoubtedly some truth in Dr Woolf's account. The cases of fraud which have received most attention have occurred at famous centres like the Sloan-Kettering Memorial Hospital14 and at Columbia, Harvard, and Yale Universities. In those institutions there is intense rivalry among members of the staff for funds and promotion. There is also a largeness of scale that makes it difficult if not impossible for the directors of those laboratories to supervise and watch over the work of their junior associates. The director of Summerlin's laboratory at Sloan-Kettering had allowed his name to appear as co-author on 341 papers in the five years preceding the discovery of Summerlin's fraud. The director of the laboratory at Harvard where Dr John Darsee had falsified data for research in cardiology had put his name on 171 papers in the previous five years.15 The director at Yale, in the same period, had published as sole or co-author 201 papers. Under the conditions prevailing in these laboratories, the informal buttresses of trustworthiness could not operate and formal administrative controls were inapplicable or simply neglected. Effective supervision and close scrutiny by the director were neglected. There was too much trust but not enough trustworthiness.

The problems raised by violations of trust in large laboratories go beyond the relations between the director and his senior scientists. The greatly increased division of labour and specialisation in "big science" now means that not only must the director be intimate with the work of his senior scientists but they, in turn, must scrutinise the work of the junior scientists and even laboratory technicians. In the case of fraud at Columbia University, where the internal and external investigations discovered at least "disreputable error" and perhaps excessive determination to report favourable findings, it came to light that a major and confessed fault lay with one of the laboratory technicians. Under threat of suit from one of the senior scientists, who had been charged with fraud, a laboratory technician at Columbia wrote a letter acknowledging that "actions I took resulted in the biasing of the experimental data". 16 Laboratories of any size are nowadays filled with highly specialised and very busy scientists who often lack the time and expertise to assess the reliability of their colleagues' work. This changed structure of science requires more trustworthiness than in the past, when less differentiated division of labour prevailed.

There are no satisfactory estimates of the current magnitude of fraud in research. There is, of course, a tendency among those outraged by even a few cases of untrustworthiness to exaggerate its magnitude. The large amount of publicity given to a small number of cases of fraud may also magnify the problem among scientists themselves and among the public.

14 Zuckerman, H., "Deviant Behavior", in Sagarin, E. (ed.), op. cit., pp. 114-115.

15 See Chubin, Daryl E., "Misconduct in Research: An Issue of Science Policy and Practice", Minerva XXIII (Summer 1985), pp. 175-202; "The Morality of Scientists", Reports and Documents, ibid., pp. 272-304; and "Misconduct in Research", ibid. (Autumn 1985), pp. 423-432.

16 The New York Times, 13 July, 1985.


Speaking for the National Institutes of Health, which at present supports most biomedical research in the United States, the deputy director, Dr William F. Raub, said that cases of fraud coming to the attention of his agency were relatively uncommon; there were about two cases a month of allegations of possible misconduct. This number, he said, is "a vanishingly small fraction" of the approximately 40,000 scientists who are supported by the National Institutes of Health at any one time. Of course, institutions receiving research funds from the National Institutes of Health are not at present required to report instances of fraud. (The National Institutes of Health is about to propose a regulation requiring such reporting.) Still, Dr Raub concluded that he had no reason to think that a large number of fraud cases were not reported.

Whether there is really a crisis in trustworthiness in science at present because of a large increase in the amount of fraud, or whether there is only perceived to be one, in some respects makes little difference. In response to the present "crisis", there probably will be an intensification of informal arrangements for scrutiny in science, especially at major universities and their large laboratories, as well as the creation of formal arrangements, such as the requirement of reports of fraud on the part of the National Institutes of Health and the establishment of special committees for monitoring and investigating. No general committee to investigate fraud in science has been established by the National Academy of Sciences, but committees have been created at particular universities to investigate and adjudicate specific allegations of fraud at their own universities. Unless the situation becomes obviously worse, this is probably as far as arrangements for the maintenance and reinforcement of technical competence and fiduciary responsibility will go.

Trustworthiness and Untrustworthiness beyond Research

A great deal of the recent discussion of violations of trust within science has concentrated on what happens in laboratories. But trust and distrust in science extend beyond the walls of laboratories. When new work is proposed to grant-awarding bodies, when papers reporting discoveries are submitted to scientific journals, and when books are offered to publishers, all this proposed and finished work must be evaluated by judges, referees and reviewers acting singly or in panels; these are not infrequently the "peers" of the scientist or laboratory director whose work is being assessed. A great deal obviously depends on the trustworthiness of these assessors of scientific work; they must be both competent and fiduciarily responsible for the truthfulness of what is presented as scientific knowledge. No "crisis" of trust is currently perceived in this area of science; probably, in the aggregate, trustworthiness in both senses prevails. Nevertheless, there are complaints about incompetence and self-interestedness on the part of assessors of all kinds. Tales are told about how a particular piece of work has been incompetently reviewed or has been misused by some reviewer with a self-interested motive. These complaints, whatever their truthfulness, indicate the presence of distrust and a consequent sense of grievance and hostility. How seriously this should be taken is difficult to say.

Distrust is also manifest in the controversy that frequently recurs over the principle of anonymity in assessment. Some scientists hold that anonymity increases the possibility of incompetence and fiduciary irresponsibility; others, in direct disagreement, say that only anonymity makes it possible for the evaluator to be candid in his judgement of the quality of the work and to think first and only of the good of science. Given this disagreement, it is unlikely that all distrust of assessment of candidates, proposals and papers, which is so essential in science, can ever be wholly eliminated. Nevertheless, the efficiency and trustworthiness of assessments is a matter which is not simply taken for granted any longer. It is likely to be subject to controversy as well as to attempts at careful study.

Public Trust in Science

Public trust or distrust towards science is important because science depends for its support on public opinion and because science has such an enormous effect on society. Through all the multifarious benefits it has brought, science has increased public trust in its competence and also, but more ambivalently, in its fiduciary responsibility for the public welfare.

Great power is seldom absolutely trusted, and so it has been with science. There are several reasons why science, and more generally the practical professions which depend on scientific knowledge, have recently become objects of public distrust, some of it rational, some irrational. The first is that science now possesses great power to do harm as well as good. It is a power that gets credit for "the green revolution" in agriculture, but also gets blamed for the harmful consequences of the discovery of atomic energy, and the possible and imagined consequences of genetic engineering applied to human beings. Some empirical evidence on the public's perceptions of the power of science, and its consequent feelings of trust and distrust, can be gleaned from the data collected through survey research during the last 20 years.

There are mixed feelings of trust and distrust towards all the powerful institutions in American society: government, labour, business, science, medicine, the military and religion. There is some lack of "confidence" in all of these institutions--but some institutions earn more "confidence" than others. Business, government and labour earn less than science, education and medicine. We may surmise that there are two reasons for this. First, the great power of private business, government and the labour unions is more visible to ordinary persons; they have learned to be more aware of it than
they are of the power of science. 17 Second, science, medicine and education are perceived to be more trustworthy in terms of fiduciary responsibility for the public welfare. Again, they are not seen as absolutely trustworthy in this regard, but as more so than business, government or the labour unions. Despite its ambivalence towards science, the public feels it can trust it more than it trusts some other institutions.

A second and related reason why the trustworthiness of science is now a social problem is the greatly increased emphasis on the value of equality in power and status. Whether it is the young against the old, blacks against whites, students against teachers, patients against physicians, or the public against scientists, all those who feel themselves inferior either in opportunity or in the possession of goods and power express some distrust of both the competence and the sense of responsibility of persons of high status and much power.

Finally, because of its own increased education and resulting competence, the public engages in ever more testing of the actual trustworthiness of scientists and other professionals. Much to the horror of many older scientific and other experts, the public is now less passively deferential to them. When scientists make excessive claims to trustworthiness, the persons who claim to speak on behalf of the public, and who indeed do have some support from some part of the public, want to exercise more scrutiny over their actions. In public controversies over nuclear power plants or over gene-splicing, groups of interested laymen bring their own scientific experts to scrutinise and criticise the views of established experts. 18 In attempting to deal critically with the views and claims of scientists, the interested laity calls up scientists who are sympathetic with their own standpoint. It is evident that even the critical part of the public has not repudiated science so completely as to refuse the benefits of the support of "their" scientists.

Another source of public distrust of science flows from the claim that scientists make for the absolute autonomy of science. The resentment on the part of some politicians and members of the politically active public, including publicists, is of a piece with the egalitarian distrust of scientists and hence of science. 19

Other cases where scientists have been denounced for imposing their views on the public are the controversies over fluoridation of the public water supplies and over the construction of more nuclear power plants. 20 In the fluoridation controversy, public distrust came from political conservatives in the name of the value of individual liberty. In the case of nuclear power, distrust began among the political radicals in the name of peace, but has now spread more widely in the name of safety. In all these cases there is a charge of scientific arrogance and authoritarianism. In her study of the controversy over school textbooks in science, especially those in biology and anthropology, Professor Dorothy Nelkin says that critics of these textbooks charge science with "an authoritarian ideology" and they express "extraordinary resentment of 'scientific dogmatism,' of the 'arrogance' and 'absence of humility' among scientists". 21 The process of retaining and increasing the trust of the lay public, for themselves and above all for science, in an egalitarian and democratic society is bound to be an endless one.

17 Lipset, Seymour Martin and Schneider, William, The Confidence Gap (New York: Free Press, 1983).

18 See the following by Nelkin, Dorothy: Science Textbook Controversies and the Politics of Equal Time (Cambridge, Mass.: MIT Press, 1977); (ed.), Controversy: Politics of Technical Decisions (Beverly Hills, Calif.: Sage Publications, 1979); "Scientific Knowledge, Public Policy, and Democracy: A Review Essay", Knowledge: Creation, Diffusion, Utilization, I (1979), pp. 106-122; and "Public Participation in Technological Decisions: Reality or Grand Illusion?", Technology Review (August-September) (1979), pp. 55-64.

19 Kevles, Daniel J., The Physicists: The History of a Scientific Community in Modern America (New York: Knopf, 1978). See also his In the Name of Eugenics: Genetics and the Uses of Human Heredity (New York: Knopf, 1984).

20 On the fluoridation controversy, see Mazur, Allan, "Disputes Between Experts", Minerva, XI (April 1973), pp. 243-262; and "Opposition to Technological Innovation", Minerva, XIII (Spring 1975), pp. 58-81.

21 Nelkin, D., Science Textbook Controversies, op. cit., pp. 131, 138.