12 Re-conceptualizing Generalization: Old Issues in a New Frame

Giampietro Gobo

INTRODUCTION

Even though qualitative methods are now recognized in the methodological literature, they are still regarded with skepticism by some methodologists, mainly those with statistical training. One reason for this skepticism concerns whether qualitative research results can be generalized, which is doubted not only because they are derived from only a few cases, but also because even where a larger number is studied these are generally selected without observing the rigorous criteria of statistical sampling theory. In this regard, the methodology textbooks still distinguish samples into two types: probability samples (simple random, systematic, proportional stratified, non-proportional stratified, multistage, cluster, area, and their various combinations), and non-probability ones (haphazard or convenience, quota, purposive, of the emblematic case, snowball, telephone)1. With regard to the latter, it is stated:

the obvious disadvantage of nonprobability sampling is that, since the probability that a person will be chosen is not known, the investigator generally cannot claim that his or her sample is representative of the larger population. This greatly limits the investigator’s ability to generalize his or her findings beyond the specific sample studied (…) A nonprobability sample may prove perfectly adequate if the researcher has no desire to generalize his or her findings beyond the sample (Bailey, 1978: 92).

This position again tends to relegate qualitative research to the marginal role of furnishing ancillary support for surveys, which is precisely how it was conceived by Barton and Lazarsfeld (1955) and the methodologists of their time.

The aim of this study is to show that this methodological denigration of qualitative research is overly severe and unjustified, for three reasons. First, because the use of probability samples and statistical inference in social research often proves problematic. Second, because there are numerous disciplines, in both the social and human sciences, whose theories are based exclusively on research conducted on only a few cases. Third, because, pace the methodological orthodoxy, a significant part of sociological knowledge is idiographic. My intention is therefore not to criticize sampling theory or its applications; rather, it is to remedy a situation where statistical inference is deemed the only acceptable method and idiographic generalization is deemed scientifically ill-founded. Finally, qualitative researchers do not need to throw away the baby of generalization with the bathwater of probability sampling, because we can have generalizations without probability.

THE PROBLEMATIC USE OF PROBABILITY SAMPLES IN SOCIAL RESEARCH

Several authors (among them Goode and Hatt, 1952; Chain, 1963; Galtung, 1967; Capecchi, 1972) have stressed that the application of statistical sampling theory in sociological contexts gives rise to various difficulties. This theory, in fact, requires the researcher to construct a probability sample (one, that is, where each subject’s likelihood of being selected is known and also every item has an equal chance of being selected), and the cases must be selected in a rigorously random manner. But these two requirements are not easy to satisfy in social research, because their fulfillment encounters a series of obstacles, not all of which can be overcome.

There is no space to describe in depth the problems and limits of statistical sampling theory (see Gobo, 2004). I will briefly examine three limits only:

1 The difficulty of finding sampling frames (lists of the population) for certain population sub-sets, because these frames are often not available. How, for example, can a random sample of the unemployed be extracted if the whole list of unemployed people is not available beforehand? It is true that many unemployed people are enrolled at job placement offices, but it is equally true that not all unemployed people are so enrolled. Consequently, the majority of studies on particular segments of the population cannot make use of population lists: consider studies on blue-collar workers, the unemployed, home-workers, artists, immigrants, housewives, pensioners, football supporters, members of political movements, charity workers, elderly people living alone, and so on.

2 The phenomenon of nonresponse. The concept of random selection is theoretically very simple and, thanks to the ideal-typical image of the ballot box, quite clear to the general public. This clarity is misleading, however, because human beings differ from balls in a ballot box in two respects: they are not immediately accessible to the researcher, and they are free to decide not to answer. In fact, account must be taken of the gap (which varies according to the research project) between the initial sample (all the individuals about whom we want to collect information) and the final sample (the cases about which we have been able to obtain information); the two sets may correspond, but usually some of the objects in the first sample are not surveyed. As Groves and Lyberg (1988: 191) pointed out, nonresponse error threatens the characteristic which makes the survey unique among research methods: its statistical inference from sample to population. If the sample is at odds with the probability model, nothing can be said about its general representativeness; that is, about whether it truly reproduces all the characteristics of the population. (A worst-case numerical sketch of this point follows the list below.)

3 Representativeness and generalizability: two sides of the same coin? The social science textbooks usually describe generalizability as the natural outcome of a prior probabilistic procedure. In other words, the necessary condition for carrying out a statistical inference is previous use of a probability sample. It is forgotten, however, that probability/representativeness and generalizability are not two sides of the same coin. The former is a property of the sample, whilst the latter concerns the findings of research. Put otherwise: between construction of a sample and confirmation of a hypothesis there intervenes a complex set of activities which pertain to at least seven different domains: (1) the trustworthiness of operational definitions and operational acts; (2) the reliability of the data collection instrument; (3) the appropriateness of conceptualizations; (4) the accuracy of the researcher’s descriptions, categorizations, and/or measurements; (5) the success of observational (or field) relations; (6) the validity of the data; and (7) the validity of the interpretation. These activities, and their relative errors (called ‘measurement errors’ in the literature), may impair the connection between probability/representativeness and generalizability – a not infrequent occurrence in a complex activity like social research.
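
To make point 2 concrete, here is a minimal numerical sketch, not taken from the chapter: the response rate, the observed proportion, and the helper function are all hypothetical. It shows that when nothing is known about the non-respondents, the final sample only brackets the population value instead of estimating it.

```python
# Minimal sketch of the nonresponse problem raised in point 2 above.
# The response rate, the observed proportion and the function name are hypothetical.

def worst_case_bounds(observed_proportion: float, response_rate: float) -> tuple[float, float]:
    """Bounds on a population proportion when nothing is known about non-respondents."""
    lower = observed_proportion * response_rate                        # non-respondents all lack the trait
    upper = observed_proportion * response_rate + (1 - response_rate)  # non-respondents all have the trait
    return lower, upper

if __name__ == "__main__":
    # Hypothetical survey: 60% of the initial sample responds; 40% of respondents report the trait.
    low, high = worst_case_bounds(observed_proportion=0.40, response_rate=0.60)
    print(f"The population proportion lies somewhere between {low:.2f} and {high:.2f}")
    # Prints: between 0.24 and 0.64; the width of the interval equals the nonresponse rate.
```

The width of the interval is exactly the nonresponse rate, which is one way of reading Groves and Lyberg's point that nonresponse threatens the survey's statistical inference from sample to population.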

These drawbacks do not signify that probability sampling and statistical inference are instruments by their nature unsuited to social research. Rather, according to the research setting, they are instruments with certain practical disadvantages that can sometimes be remedied and sometimes cannot.

In light of these difficulties, probability sampling cannot be propounded as the only model suited to the generalization of findings. As Geertz (1973: 21) points out, it is not only statistical inference that enables the move from ‘local truths to general visions.’ Moreover, as we have seen, not all sociological phenomena can be studied with rigorous application of the principles of sampling theory, the consequence being that the adoption of other forms of generalization has been vital for social research: otherwise, an important part of sociological theory (that based on research conducted on a few cases or even on haphazard or convenience samples, as in the cases of, for example, Gouldner, Dalton, Becker, Goffman, Garfinkel, Cicourel) would never have been produced.

GENERALIZATION AS SEEN BY QUALITATIVE METHODOLOGISTS

Qualitative researchers have taken up a variety of positions in reaction to the pronouncement that those who do not use probability samples cannot generalize. The most extreme of them have (paradoxically) on the one hand accepted the verdict but on the other dismissed sampling as ‘a mere positivist worry’ (Lincoln and Guba, 1979; Denzin, 1983).

Some have aptly pointed out that ‘most social anthropological and a good deal of sociological theorizing has been founded upon case studies’ (Mitchell, 1983: 188) or has been the product of exclusively theoretical inquiry (without, that is, being grounded on systematic research). The more moderate have complied with the injunction of the statisticians but reconceptualized the problem by claiming that there are two types of generalization (which they have termed in various ways): enumerative (statistical) vs. analytic induction (Znaniecki, 1934: 236; Mitchell, 1983: 191); formalistic/scientific vs. naturalistic generalization (Stake, 1978: 6); distributive vs. theoretical generalization (Hammersley, 1992: 186ff; Williams, 2000: 215; Payne and Williams, 2005: 296–7). The first type of generalization involves estimating the distribution of particular features within a finite population; the second, eminently theoretical, is concerned with the relations among the variables in any sample of the relevant kind (moreover, the population of relevant cases is potentially infinite). The latter is usually based on identifying causal or essential relations among particular categories, whose character is defined by those relations, so that it is inferred that all instances of those categories are involved in the specified type of relation.

Even though some qualitative researchers may privately agree with Znaniecki (1934: 236–7) that analytical induction is the true method of science and the superior one (because it discovers the causal relations of a phenomenon rather than only the probabilistic ones of co-occurrence), the idea that there exist two types of generalization represents acceptance of the statisticians’ diktat. It also represents acceptance of a ‘political’ division into areas of competence: a compromise already envisaged by some members of the Chicago School, like Burgess (1927), who maintained that statistics and case studies were mutually complementary2, each with its own criteria of excellence.

The distinction between the two types of generalization has been drawn with exemplary clarity by Alberoni and colleagues, who wrote in their Introduction to a study on 108 political activists of the Italian Communist Party and the Christian Democrat Party as follows:

if we want to know, for instance, how many activists of both parties in the whole country are from families of Catholic or Communist tradition, (this) study is useless. Conversely, if we want to show that family background is important in determining whether a citizen will be an activist in the Communist rather than Christian Democratic party, this research can give the right answer. If we want to find out what are and have been the percentages of the different ‘types’ of activists […] in both parties, the study is useless, whereas if we want to show that these types exist the study gives a certain answer […]. The study does not aim at giving a quantitative objective description of Italian activism, but it can aid understanding of some of its essential aspects, basic motivations, crucial experiences and typical situations which gave birth to Italian activism and help keep it alive. (1967: 13)

The two generalizations are therefore made in completely different ways3.

This moderate stance has been adopted by the majority of qualitative methodologists, some of whom have sought to underscore the difference between statistical and ‘qualitative’ generalization by coining specific terms for the latter. This endeavor has given rise to a welter of terms: ‘naturalistic generalization’ (Stake, 1978: 6), ‘transferability’ (Lincoln and Guba, 1979), ‘translatability’ (Goetz and LeCompte, 1984), ‘analytic generalization’ (Yin, 1984), ‘extrapolation’ (Mitchell, 1983: 191; Alasuutari, 1995: 196–7), ‘moderatum generalization’ (Williams, 2000; Payne and Williams, 2005), and others.

Five concepts of generalization

At least five different positions on the generalizability of research results can be identified within the qualitative methodological tradition (see Ragin and Becker, 1992; Gomm, Hammersley and Foster, 2000).

The first position, the most radical of them, has been assumed by Lincoln and Guba (1979), Guba (1981), Guba and Lincoln (1982), and Denzin (1983). It adheres to the traditional position that qualitative research is an idiographic account which lays no claim to generalization (see Burrell and Morgan, 1979). Norman K. Denzin is very explicit on the matter:

The interpretivist rejects generalization as a goal and never aims to draw randomly selected samples of human experience. For the interpretivist every instance of social interaction, if thickly described (Geertz, 1973), represents a slice from the life world that is the proper subject matter for interpretative inquiry (…) Every topic (…) must be seen as carrying its own logic, sense or order, structure, and meaning. (Denzin, 1983: 133–4)4

Guba and Lincoln (1981: 62) likewise claim that ‘it is virtually impossible to imagine any human behavior that is not heavily mediated by the context in which it occurs. One can easily conclude that generalizations that are intended to be context free will have little that is useful to say about human behavior.’ However, Guba and Lincoln moderate their position by introducing two novel elements: a new formulation of the concept of working hypothesis proposed by Cronbach (1975), and the new concept of transferability.

According to Cronbach, ‘when we give proper weight to local conditions, any generalization is a working hypothesis, not a conclusion’ (1975: 125). Hence, Lincoln and Guba maintain,

local conditions (…) make it impossible to generalize. If there is a ‘true’ generalization, it is that there can be no generalization. And note that ‘working hypotheses’ are tentative both for the situation in which they are first uncovered and for other situations; there are always differences in context from situation to situation, and even the single situation differs over time. (1979, reprinted 2000: 39)

They now make their own proposal, which has become well-known in qualitative research:

How can one tell whether a working hypothesis developed in Context A might be applicable in Context B? We suggest that the answer to that question must be empirical: the degree of transferability is a direct function of the similarity between the two contexts, what we shall call ‘fittingness’. Fittingness is defined as the degree of congruence between sending and receiving contexts. If Context A and Context B are ‘sufficiently’ congruent, then working hypotheses from the sending originating context may be applicable in the receiving context. (Lincoln and Guba, 1979, reprinted 2000: 40)

However, transferability is not an inferential process performed by the researcher (who cannot know all the other contexts of research). Rather, it is a choice made by the reader, who on the basis of argumentative logic and a thick description (of the case study) produced by the researcher, may decide (on his/her own responsibility – see Gomm, Hammersley and Foster, 2000: 102) to transfer this knowledge to other situations that she/he deems similar (Lincoln and Guba, 1979, reprinted 2000: 40). The reader, basing this on the persuasive power of the arguments used by the researcher, decides on the similarity between the (sending) context of the case studied and the (receiving) contexts to which the reader him/herself intends to apply the results (Guba and Lincoln, 1982: 246).

To conclude, these authors are convinced that ‘generalizations are impossible since phenomena are neither time- nor context-free’; however, ‘some transferability of these hypotheses may be possible from situation to situation, depending on the degree of temporal and contextual similarity’ (Guba and Lincoln, 1982: 238).

A second, more moderate, approach has been proposed by Stake (1978, 1994), who argues that the purpose of case studies is not so much to produce general conclusions as to describe and analyze the principal features of the phenomenon studied. If these features concern an emblematic case of political, social, or economic importance (for example, the decision-making procedures of a large institution like the US Department of Defense), the ‘intrinsic case study’ will per se produce results of indubitable intrinsic relevance5, even though they cannot be generalized in accordance with the canons of scientific induction:

naturalistic generalization, arrived at by recognizing the similarities of objects and issues in and out of context and by sensing the natural covariations of happenings. To generalize this way is to be both intuitive and empirical, and not idiotic. Naturalistic generalizations develop within a person as a product of experience. They derive from the tacit knowledge of how things are, why they are, how people feel about them, and how these things are likely to be later or in other places with which this person is familiar. They seldom take the form of predictions but they lead regularly to expectations (…) These generalizations may become verbalized, passing of course from tacit knowledge to propositional; but they have not yet passed the empirical and logical tests that characterize formal (scholarly, scientific) generalizations. (Stake, 1978: 6)

A third position, which is contiguous to the intrinsic case study, has been put forward by Connolly (1998). It starts from the distinction between extensive and intensive studies. The aim of the former is to identify statistically significant and therefore generalizable causal relations; the aim of the latter (like case studies) is to reconstruct in detail the mechanisms that connect cause and effect. Like Stake, Connolly relieves the case study of responsibility for formal generalization, but he gives it a task complementary to such generalization: explaining (via the mechanisms) correlations whose statistical significance has already been documented by other studies.

These three positions have a common basis consisting in the concept of ‘theoretical sampling’ proposed by Glaser and Strauss (1967), Schatzman and Strauss (1973) and Strauss (1987): when we do not possess complete information about the population, cases are selected according to their status on one or more properties identified as the subject matter for research. As Mason writes, ‘theoretical sampling is concerned with constructing a sample which is meaningful theoretically because it builds in certain characteristics or criteria which help to develop and test your theory and explanation’ (1996: 94). And Strauss and Corbin are very explicit on the concept of generalization:

in terms of making generalization to a larger population, we are not attempting to generalize as such but to specify […] the conditions under which our phenomena exist, the action/interaction that pertains to them, and the associated outcomes or consequences. This means that our theoretical formulation applies to these situations or circumstances but to no others. (1990: 191, bold in the original text)

In other words, the aim is not to generalize to some finite population but to develop theoretical ideas that will have general validity.

More practical are those authors who engage in ‘evaluation research’ (Cronbach, 1982; Pawson and Tilley, 1997). They ground their reasoning on the notion of the cumulability of knowledge: case study after case study, in the course of time in a particular sector of research, there accumulates a repertoire or inventory of the possible forms that a particular object of study may assume. As Pawson and Tilley (1997: 119–20) put it, in polemic with Guba and Lincoln, what can be transferred between studies are not ‘lumps of cases’ but ‘sets of ideas’ which enable understanding of general mechanisms. In other words, cumulability is the prelude to qualitative generalizability.

The final position, and perhaps the oldest of them, is represented by Znaniecki’s method of analytic induction. The purpose of analytic induction is to uncover causal relations through identification of the essential characteristics of the phenomenon studied. To this end, the method starts not with a hypothesis but with a limited set of cases from which an initial explanatory hypothesis is then derived. If the initial hypothesis fails to be confirmed by one case, it is revised. Additional cases of the same class of phenomena are then selected. If the hypothesis is not confirmed by these further cases, the conceptual definition of the phenomenon is revised. The process continues until the hypothesis is no longer refuted and further study tells the researcher nothing new (Znaniecki, 1934: 236ff). The inner logic of analytic induction derives from Mill’s ‘method of agreement’ and ‘method of difference.’
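
The iterative logic just described can be rendered schematically. The sketch below is only an illustration of that loop under stated assumptions: the case collections and the derive, explains, revise_hypothesis and redefine_phenomenon callables are hypothetical placeholders that a researcher would have to supply; they are not part of Znaniecki's own formulation.

```python
# Schematic sketch of analytic induction as summarized above; every callable
# passed in is a hypothetical placeholder supplied by the researcher.

def analytic_induction(initial_cases, further_cases, derive, explains,
                       revise_hypothesis, redefine_phenomenon):
    """Iterate over cases until the working hypothesis is no longer refuted."""
    hypothesis = derive(initial_cases)          # start from a limited set of cases, not from a hypothesis
    studied = list(initial_cases)
    for case in further_cases:                  # additional cases of the same class of phenomena
        if not explains(hypothesis, case):
            # a disconfirming case forces a revision of the hypothesis ...
            hypothesis = revise_hypothesis(hypothesis, case)
            if not explains(hypothesis, case):
                # ... or, failing that, a revision of the conceptual definition of the phenomenon
                hypothesis = redefine_phenomenon(hypothesis, case)
        studied.append(case)
    # the loop ends when further study tells the researcher nothing new
    return hypothesis, studied
```

Note that nothing in the loop estimates how widespread the phenomenon is; like the method it depicts, it only refines a claim about essential characteristics and causal relations.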

There are several variants of Znaniecki’s method of analytic induction. One of them is Mitchell’s (1983) critical case study approach.

Analytic induction revisited has also been widely used in comparative studies based on a small number of cases, ‘when little more than a handful of nations or organizations – sometimes even fewer – are compared with respect to the forces driving a societal outcome such as political development or an organizational characteristic’ (Lieberson, 1992, reprinted 2000: 208).

The unavoidableness of generalization

Sampling and generalizing are unavoidable practices because, even before being scientific, they are everyday life activities deeply rooted in thought, language, and practice (Gobo, 2004). With regard to thought, cognitive psychologists have demonstrated the tendency of people to generalize on the basis of a few observed characteristics or events, a process called the heuristic of representativeness by Kahneman and Tversky (1972) and Tversky and Kahneman (1974). With regard to the world of language, the same function is performed, as Becker has stated, by ‘synecdoche, a rhetorical figure in which we use a part for something to refer the listener or reader to the whole it belongs to’ (1998: 67). Finally, in the world of action, the seller shows a sample of cloth to the customer; in a paint shop the buyer skims through the catalogue of color shades in order to select a paint; the buyer tastes in order to choose a wine or a cheese; the teacher asks a student questions to assess his or her knowledge about the syllabus. In everyday life, social actors constantly sample and generalize. As Gomm, Hammersley and Foster point out, ‘we all engage in naturalistic generalizations routinely in the course of our life, and this may take the form of empirical generalization as well as of informal theoretical inference. Given this, there is no reason in principle why case study research should not provide the basis for empirical generalization’ (2000: 104). This is also because the unavoidability of generalization is epistemologically and reflexively founded. As Gomm, Hammersley and Foster acutely observe:

the very meaning of the word ‘case’ implies that what it refers to is a case [instance or example] of something. In other words, we necessarily identify cases in terms of general categories (…) the idea that somehow cases can be identified independently of our orientation to them is false. It is misleading to talk of the uniqueness of cases (…) we can only identify their distinctiveness on the basis of a notion of what is typical or representative of some categorial group or population. (Gomm, Hammersley and Foster, 2000: 104)

The unavoidableness of generalizing is such that ‘in practice, much case study research has in fact put forward empirical generalizations’ (Gomm, Hammersley and Foster, 2000: 98) and ‘current qualitative researchers often seem to produce [generalization] unconsciously’ (Payne and Williams, 2005: 297).

FOR AN IDIOGRAPHIC SAMPLING THEORY

The thesis (which I have called ‘moderate’) that there are two types of generalization has had the indubitable merit of cooling the dispute with quantitative methodologists and of legitimating two ways to conduct research. However, this political compromise has also had a number of harmful consequences.

First, it has not stimulated reflection on how to emancipate ‘qualitative’ generalization from its subordination to statistical inference. Traditional methodologists continue to attribute inferior status to qualitative research, on the grounds that although it can produce interesting results, these have only a limited extension. This long-standing positivist prejudice has recently been reinforced by the extreme positions taken up by Lincoln and Guba (1979) and Denzin (1983). Their insistence that generalization in interpretative research is impossible, and that their work is not intended to produce scientific generalizations, paradoxically fits perfectly with the equally intransigent position of quantitative methodologists. As Gomm, Hammersley and Foster observe, ‘to deny the possibility of case studies providing the basis for empirical generalizations is to accept the views of their critics too readily’ (2000: 98). Even though it may be a coincidence, the fact that Egon Guba was a well-known statistician before he became a celebrated qualitative methodologist may have heightened the inflexibility of the debate. An unexpected consequence of this paradox is that interpretivism has been just as positivist about qualitative generalization as quantitative methods have.

Second, the concept of theoretical sampling has failed to address the problem of sample representativeness which Denzin himself (1971) considered so important. Likewise, the concept of transferability provides ‘no guidance for researchers about which case to study – in effect, it implies that any case may be as good as any other in this respect’ (Gomm, Hammersley and Foster, 2000: 101). This omission has been pedagogically harmful because it has permitted several generations of qualitative researchers entirely to neglect – in the belief that ‘anything goes’ – this aspect of the investigative process.

Third, an opportunity has been missed to rediscuss the entire issue, addressing it in more practical (and not solely theoretical) terms with a view to developing a new sampling theory: an idiographic theory, joint and equal with statistical theory, and which remedies a series of ancestral misunderstandings:

denial of the capacity of case study research to support empirical [distributive] generalization often seems to rest on the mistaken assumption that this form of generalization requires statistical sampling. This restricts the idea of representation to its statistical version; it confuses the task of empirical generalization with the use of statistical techniques to achieve that goal. While those techniques are a very effective basis for generalization, they are not essential. (Gomm, Hammersley and Foster, 2000: 104)

Sampling in some contemporary sciences

The first step in this endeavor is to survey certain disciplines – paleontology, archaeology, geology, ethology, biology, astronomy, anthropology, cognitive science, linguistics (which for some scientists is more reputable than sociology) – and see how they have tackled the problems of representativeness and generalizability. In certain respects, these are disciplines akin to qualitative research, for they work exclusively on a few cases and have learnt to make a virtue out of necessity. As Becker writes:

Archeologists and paleontologists have this problem to solve when they uncover the remnants of a now-vanished society. They find some bones, but not a whole skeleton; they find some cooking equipment, but not the whole kitchen; they find some garbage, but not the stuff of which the garbage is the remains. They know that they are lucky to have found the little they have, because the world is not organized to make life easy for archeologists. So they don’t complain about having lousy data. (1998: 70–1)

For reasons of space, it is not possible here to provide an exhaustive account of how these disciplines have dealt with the above issues. But by way of example, consider the following study, which is one of the dozens published on the subject. It appeared in the journal Nature on January 23, 2003.

The scientist Xing Xu and colleagues (2003) of the Institute of Vertebrate Palaeontology, Beijing, had found six fossils in the province of Liaoning, North China. The impression left in the rock was of two pairs of wings and a long feathered tail of what appeared to be a Microraptor gui: a dinosaur less than one meter in length which lived in that region of China around 130 million years ago. According to its discoverers, the fossil was the missing link between terricolous dinosaurs and modern birds, the intermediate evolutionary stage for which scientists had long been searching. The discovery has fuelled the debate among paleontologists on the origin of flight. Whilst the close kinship between birds and dinosaurs is accepted by almost all scientists, there is much disagreement on the evolutionary stages that led to winged flight. The predominant theory is that wings began to develop, not to enable flight but to help the ancestors of birds to run faster. The small dinosaur discovered in China instead appeared to support the opposite hypothesis, namely that the direct ascendants of birds were animals which climbed trees and used wings to glide back to earth. This was the theory propounded, for example, by the American naturalist William Beebe, who as early as 1915 had predicted the existence of feathered dinosaurs exactly like Microraptor gui. However, the British journal urged caution when evaluating the importance of the discovery: the Microraptor could also be an evolutionary blind alley which had not left descendants.

There are therefore numerous disciplines which work on a limited number of cases, and do so consciously; in fact, there is animated discussion within them on sampling and generalizability. Moreover, this procedure is adopted by other disciplines as well: for instance, biology, astrophysics, history, genetics, anthropology, linguistics, cognitive science, psychology (whose theories are largely based on experiments, and therefore on research conducted on non-probabilistic samples consisting of psychology students). Why, we may ask, is this procedure acceptable for monkeys, rocks, and cells but not for human beings? Why do the majority of disciplines work with/on non-probability samples (regarded as being just as representative of their relative populations and therefore as producing generalizable results) while in sociology this is not possible? Why can a geneticist like Luca Cavalli Sforza of Stanford University argue that the evolution of language has had a direct impact on our genetic heritage, while in sociology a similar claim would require very different methodological support? The majority of these disciplines start from the assumption that their objects of study possess quasi-invariant states on the properties observed: that is, their states with respect to a property (e.g. size of the brain or the physique of a hominid) vary little and slowly among members of the class. Consequently, these disciplines are unconcerned about their use of only a handful of cases to draw inferences and generalizations about thousands of people, animals, plants, and other objects. Moreover, science studies the individual object/phenomenon not in itself but as a member of a broader class of objects/phenomena with particular characteristics/properties.

FOUR PROPOSALS FOR AN IDIOGRAPHIC SAMPLING THEORY

The above survey of disciplines midway between the natural sciences and the social sciences yields a number of suggestions for the formulation of an idiographic sampling theory. They can be summarized in the following four steps:

(a) abandon the (statistical) principle of probability;
(b) recover the (statistical) principle of variance;
(c) pay renewed attention to the units of analysis;
(d) identify social regularities.

Representativeness without probability

The use of probability samples does not automatically signify the use of representative samples. Random and representative are terms neither synonymous nor necessarily interrelated. ‘Randomness’ concerns a particular procedure used to select the cases to include in a sample, while ‘representativeness’ concerns the outcome of the selection. One may question whether the former is the obligatory path for the latter. Nor do representativeness and probability form a natural pair, since it may be possible to construct a representative sample using other procedures. Qualitative research (or at least a part of it) does not relinquish the aim of working with representative samples; it only rejects the obligatory nexus between probabilistic and representative (on the one hand), or between randomness and representativeness (on the other).

It is therefore not necessarily the case that a researcher must choose between an (approximately) random sample and an entirely subjective one – or between a sample which is (even only) partially probabilistic and one about whose representativeness absolutely nothing can be said. Between the rationalism and the postmodern nihilism underlying these two positions, one may attempt to address the problem in practical terms, doing so by examining the nature of the units of analysis considered, rather than adhering to standard procedural rules. As stressed by Rositi (1993: 198), we may reasonably doubt the generalizability of findings from

studies of 1,000–2,000 cases which claim to sample the whole population. We have to wonder if we should prefer such samples with such aims […]. Studies with samples of 100–200 conversational interviews, structured to ‘describe’ variables rather than a population are definitely more suitable for a new model of studying society. (1993: 198)

Variance: From (general) principle to (local) practice

The second step is to recover the (statistical) principle of variance, which has received less attention than the probability principle. Contrary to the latter’s standardizing intent and automatist inclination (which are among the reasons for its success), variance is a criterion which requires the researcher to reason, to conduct contextual analysis, and to take local decisions. Under the variance principle,

in order to determine the sample size, the statistician must first know the range of variance that the researcher intends to measure (at least in sufficiently close terms) because it is likely that, if the range of variance of variable X is high, n [the number of individuals to interview] will be high, whereas if the range of variance is restricted (for example to only two modalities), n may be very restricted as well. (Capecchi, 1972: 50)

Hence, it is more likely that a sample will be a miniature of the population if that population is tendentially homogeneous; and it is less likely to be so if the reference population is tendentially heterogeneous. Consequently, if the variance is high, the researcher will require a large number of cases (in order to include every dimension of the phenomenon studied in his/her sample). If, instead, the variance is low, the researcher will presumably need only a few cases, and in some instances only one. In other words,

it is important to recognize that the greater the heterogeneity of a population the more problematic are empirical generalizations based on a single case, or a handful of cases. If we could reasonably assume that the population were composed of more or less identical units, then there would be no problem. (Gomm, Hammersley and Foster, 2000: 104)

As Payne and Williams (2005: 306–7) also point out:

the breadth of generalization can be extensive or narrow, depending on the nature of the phenomenon under study and our assumptions about the wider social world (…) [hence] the generalization may claim high or lower levels of precision of estimates (…) [and it] will be conditional upon the ontological status of the phenomena in question. We can say more, or make stronger claims about some things than others. A taxonomy of phenomena might look like this: 1° physical objects and their social properties; 2° social structures; 3° cultural features and artefacts; 4° symbols; 5° group relationships; 6° dyadic relationship; 7° psychological dispositions/behaviour (…) This outline taxonomy demonstrates that generalizations depend on what levels of social phenomena are being studied.

The conversation analyst Harvey Sacks (1992, vol. 1: 485, quoted in Silverman, 2000: 109) reminds us of the anthropologist and linguist Benjamin Lee Whorf, who was able to reconstruct Navajo grammar by extensively interviewing only one native Indian speaker. Grammars usually have low variance. However, had Whorf wanted to study how the Navajo educated their children, entertained themselves, etc., he would (perhaps) have found greater variance in the phenomenon and would have needed more cases. On this logic, the formal criteria that guide sampling are more informed by and embedded in sociological (rather than statistical) reasoning based on contingent reflection about the dimensions specific to the phenomenon investigated and the knowledge objectives of the research.
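
The Whorf example can be turned into a toy simulation. Everything below is an assumption made for illustration (synthetic populations, arbitrary parameters, a five-case sample); the point is simply that a low-variance property is well represented by very few cases, while a high-variance one is not.

```python
# Toy illustration of the variance principle using synthetic data.
# Population parameters and the sample size are arbitrary assumptions.
import random
import statistics

random.seed(0)

homogeneous = [random.gauss(50, 2) for _ in range(10_000)]     # low-variance property (like a grammar)
heterogeneous = [random.gauss(50, 25) for _ in range(10_000)]  # high-variance property (like child-rearing practices)

for label, population in (("low variance", homogeneous), ("high variance", heterogeneous)):
    sample = random.sample(population, 5)                      # only a handful of cases
    gap = abs(statistics.mean(sample) - statistics.mean(population))
    print(f"{label}: a 5-case sample misses the population mean by {gap:.1f}")

# With low variance the tiny sample is already a fair miniature of the population;
# with high variance it is generally not, and more cases are needed.
```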

Moreover, as said, an authoritative part of sociological theory and a large part of anthropological theory are based on the case study: the quintessence of non-probability sampling. The research studies by Alvin G. Gouldner and Melvin Dalton belong to this category. For example, Gouldner (1954) studied a gypsum mine situated close to the university where he taught (a convenience sample, therefore6). In his methodological appendix, Gouldner reported that his team conducted 174 interviews – and therefore covered almost all the population (precisely 77%). One hundred and thirty-two of these 174 interviews were conducted with a ‘representative sample’ of the blue-collar workers at the company, for which purpose Gouldner used quota sampling stratified by age, rank, and tasks. He then constructed another representative sample of 92 blue-collar workers, to whom a questionnaire was administered.

Dalton (1959), who was a company manager at that time, conducted covert observation at Milo and Fruhuling, the fictitious names of two American companies for which he worked as a consultant (again a convenience sample, therefore). The ethnologist De Martino (1961) observed 21 people suffering from tarantism; Goffman (1961) stayed for several months at a psychiatric hospital; the anthropologist Geertz (1972) attended 57 cock fights; Sacks and colleagues described the mechanics of conversational interaction by analyzing a few telephone calls; the anthropologist Crapanzano (1980) studied Moroccan social relations through the experience of Tuhami, a tilemaker. The anthropologist Griaule (1948) reconstructed the cosmology of the Dogon, a tribe in Mali, by questioning only a small group of informants; Bourdieu’s book (1993) on professions was based on 50 interviews with policewomen, temporary workers, attorneys, blue-collar workers, civil servants, and unemployed workers.

Why, one may ask, have such circumscribed studies given rise to such wide-ranging theories? In other words, why have they been generalized to other contexts? I shall answer these questions later. For the moment I would stress (and avoid) the danger of the nihilistic or postmodern drift implied by this approach, where any sample may serve and it is not worth bothering too much about it. Instead, at a certain point of the inquiry, giving clear definition to the units of analysis (an operation performed before the cases are selected, and therefore before the sample is constructed) is of extreme importance if the research is not to be botched and empirically inconsistent. On analyzing a series of Finnish studies on ‘artists,’ Mitchell and Karttunen (1991) found that the results differed according to the definition given to ‘artist’ by the researchers, a definition which then guided construction of the sample. In some studies, the category ‘artist’ included (i) subjects who defined themselves as artists; (ii) those permanently engaged in the production of works of art; (iii) those recognized as artists by society at large; and (iv) those recognized as such by associations of artists. The obvious consequence was that it was subsequently impossible to compare the results of these studies.

Units of analysis

The standard practice in sociology and political science is to choose clearly defined and easily detectable individual or collective units: persons, households, groups, associations, movements, parties, institutions, organizations, regions, or states. The consistency of these collective subjects is vague. In practice, members of these groups are interviewed individually: the head of the family, the human resources manager, the statistics department manager, and so on.

This means that the sampling unit (e.g. the family) is different from the observational unit (i.e. the single respondent as a member of the family). Only a focus group can (at least to some extent) preserve the integrity of the collective subject. Instead, choosing individuals implies an atomistic rather than organic conception of society (Burgess, 1927), whose structural elements are taken for granted or reckoned to be mirrored in the individual (Galtung, 1967: 37), while the sociological tradition that gives priority to relations over individuals is neglected. As a consequence, the following more dynamic units are neglected as well:

• beliefs, attitudes, stereotypes, opinions;
• emotions, motivations;
• behaviors, social relations, meetings, interactions, ceremonies, rituals, networks;
• cultural products (such as pictures, paintings, movies, theatre plays, television programs);
• rules and social conventions;
• documents and texts (historical, literary, journalistic);
• situations and events (wars, elections).

Hence, ‘a reliable sampling model that recognizes interaction must be adopted [so that sampling is conducted on] interactive units (such as social relationships, encounters, organizations)’ (Denzin, 1971: 269).

The researcher should focus his/her investigation on these kinds of units, not only because social processes are more easily detectable and observable, but also because these units allow more direct and deeper analysis of the characteristics observed.

Consider the following illustrative example. Assume that we want to study work practices at call centers, which are technology-intensive workplaces. In Italy, it has been calculated that there were 1,350 call centers in 2002. In order to construct a probability and representative sample, we may proceed in two ways: randomly extract a certain number of cases from the population list (which is possible because a complete list can be obtained from the Chambers of Commerce), or construct a proportional stratified sample. In this latter case, we must first classify call centers according to the properties that interest us:

(a) the ownership of the organization, so that we have private call centers (e.g. Vodafone), public ones (e.g. the 911 emergency helpline), and non-profit ones;
(b) the ‘vocation,’ so that we have call centers that are ‘generalist’ (in the sense that they provide a variety of services) or ‘vertical’ (i.e. dedicated to only one service, e.g. credit recovery);
(c) membership or otherwise of the organization for which the service is provided, so that we have call centers ‘internal’ to the company, or ones to which the work is outsourced;
(d) the classic variables, such as the size of the organization (small, medium, large), geographical location (north-west, north-east, centre, south, islands), etc.;
(e) the type of service furnished.

Note that many of these properties are mutually exclusive, so that the sampling decision must be carefully pondered. In these cases, the usual practice is for the researcher to base the probability sampling on the first property. However, this may be sociologically inadequate if the researcher’s interest is in work practices, because these cannot be accessed via the variable ‘ownership.’ For some authors (e.g. Capecchi, 1972), representativeness does not seem to transfer from one property to another. Put otherwise: it is not the variance of the ownership of call centers that interests us here, but the variance of work practices. It might be more satisfactory to choose property (e). Experience of this sector of inquiry (but also the literature, previous research, interviews with experts or operators in the sector, etc.) shows that call centers mainly provide the following services: counseling, credit recovery, marketing, interviewing, and advertising. Constructing a probability sample on this classification is practically impossible because a population list for each of these activities does not exist. The only alternative is to use the method outlined in the previous section. Again on the basis of experience, we note that only the first of these five activities has substantial variance, while the latter four seem to have low variance. In fact, the counseling provided by call centers is multiform: it consists of information, technical assistance, psychological help or support, medical advice, or therapy. Consequently, in order to preserve the representativeness of the sample, we must sample several cases for the specific work practice of counseling. If we have insufficient resources to collect the necessary number of cases, we can restrict our research to only some activities. Other studies in the future will account for the rest.
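
By way of illustration only, the selection logic of this example can be sketched as follows. The roster of known call centers and the quotas are hypothetical; the service categories are the five listed above, with extra cases allotted to counseling because that is where the variance of work practices lies.

```python
# Sketch of the selection logic in the call-center example: sample on property (e),
# taking several cases where the work practice is multiform (counseling) and fewer
# where its variance is low. The roster and the quotas are hypothetical.
import random
from collections import defaultdict

random.seed(1)

known_cases = [  # (call center, main service) gathered from experience, experts, the literature
    ("CC-01", "counseling"), ("CC-02", "counseling"), ("CC-03", "counseling"),
    ("CC-04", "counseling"), ("CC-05", "counseling"), ("CC-06", "credit recovery"),
    ("CC-07", "credit recovery"), ("CC-08", "marketing"), ("CC-09", "interviewing"),
    ("CC-10", "advertising"),
]

quotas = {  # several cases for the high-variance activity, one each for the others
    "counseling": 3, "credit recovery": 1, "marketing": 1, "interviewing": 1, "advertising": 1,
}

by_service = defaultdict(list)
for name, service in known_cases:
    by_service[service].append(name)

sample = {service: random.sample(by_service[service], min(quota, len(by_service[service])))
          for service, quota in quotas.items()}
print(sample)
```

The code is trivial; what matters is the reasoning it encodes: representativeness is pursued by reasoning about the variance of the property of interest, not by randomizing over whatever list happens to be available.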

It is evident that representativeness is not always possessed by the sample when research begins. It is a resource also acquired ex post, progressively and iteratively, research project after research project, with the gradual accumulation of expertise. This definition of representativeness seems somehow to tie this property to the relation between the results obtained by an individual research project and the experience of the researcher who conducts it.

In search of social regularities

I now turn to the final aspect of the entire question. There are three broad criteria which serve to orient the construction of a non-probability sample; and to each of them corresponds a particular form of reasoning alternative to inductive or statistical inference: deductive inference, comparative inference, and the emblematic case.

The three criteria impose different cognitive objectives, and they are used according to the type of generalization that the researcher wants to make. The first two criteria are in some way opposed to each other: comparative inference maximizes the probability of extracting odd cases; deductive inference selects only odd (deviant) cases. Theoretical inference instead concentrates on emblematic cases, focusing on social similarities.

Deductive inference

The first criterion consists of the choice of a critical or deviant case which can be used (à la Popper) to prove the refutability of an accredited or standard theory. An outstanding example of its application is provided by Goldthorpe et al.’s study (1968) of workers in the town of Luton. The distinctive feature of this inferential process is that it starts from a theory whose implausibility it intends to prove: in this case, the embourgeoisement of the working class. The theory is tested against a case comprising the largest number (and the greatest intensity) of the founding properties or requirements of this theory. If, in these optimal conditions, the consequences foreseen by the theory do not ensue, it is extremely unlikely that the theory will work in all those empirical cases where those requirements are more weakly present. Hence the theory is falsified, and its inadequacy can be legitimately generalized. When the critical case study procedure is used, the cases are selected according to their explanatory power, rather than according to the criteria of probability theory or their typicality (Mitchell, 1983: 207, 209). Moreover, the legitimacy of the generalization (of the scant explanatory capacity of the theory just falsified) depends not only on the cogency of the rhetorical argument but also on the strength of the connections established between theory and observations.

There are many other important studies (which follow in a very broad sense the Popperian approach) which have focused on deviant cases in order to understand standard behavior: Goffman (1961) on ceremonies and rituals in a psychiatric clinic; Cicourel and Boise (1972) on the interpersonal communications of deaf children; Garfinkel (1967) on achievement of sex status in an ‘intersexed’ person; Pollner and Winkler (1985) on interactions in a family with a mentally retarded child; and many others.

This criterion can also be used to explore subcultures or emergent or avant-garde phenomena which may become dominant or significant in the future, although at present they are still marginal: see Festinger et al. (1956) on millennial groups after their predicted date for the end of the world had passed; Becker (1953) on marijuana smokers; Hebdige (1979) on style groups like mods, punks, and skinheads; Fielding (1981) on right-wing political movements.

The deviant case can also be used to prove the refutability and falsifiability of a well-known and received theory, as in Rosenhan’s (1973) study on the medical-organizational origin of psychiatric illness, or the already-cited study by Goldthorpe et al. (1968) on blue-collar workers in the town of Luton. This criterion (which is widely applied in biology, astrophysics, history, genetics, anthropology, linguistics, paleontology, archaeology, ethology, geology) does not determine the extent to which a phenomenon is widespread in the population. It only directs the scientific community’s attention to the phenomenon’s existence and the need to revise the dominant theory. The generalization to the population comes about by default: that is, by virtue of the non-occurrence of the event foreseen by the theory under examination.

Obviously, the generalization must be carefully thought through. Otherwise, the danger arises of lapsing into the determinism to which Popper’s falsificationism is susceptible. As Lieberson (1992: 212) emphasizes:

it is very difficult to reject a major theory because it appears not to operate in some specific setting. One is wary of concluding that Max Weber was wrong because of a single deviation in some inadequately understood time or place. In the same fashion, we would view an accident caused by a sober driver as failing to disprove the notion that drinking causes automobile accidents.

Comparative inference

The second criterion is used to make generalizations similar to statistical inferences, but without employing probability criteria. This can be done by identifying cases in extreme situations with respect to certain characteristics, or cases spread across a wide range of situations, in order to maximize variation, that is, to cover all the possible situations and so capture the heterogeneity of a population. We can choose two elementary schools where, from press reports, previous studies, interviews or personal experience, we know we can find two extreme situations: in the first school there are severe difficulties of integration between natives and immigrants, while in the second there are virtually none. We can also pick three schools: the first with severe integration difficulties; the second with average difficulties; and the third with rare ones. In the 1930s and 1940s, the American sociologist W. Lloyd Warner (1898–1970) and his team of colleagues and students carried out studies on various communities in the United States. When Warner set about choosing his samples, he decided to select communities whose social structures mirrored important features of American society. He chose four communities (given assumed names): a city in Massachusetts (Yankee City) ruled by traditions, on which he wrote five volumes; a lonely county of Mississippi (Deep South, 1941); a Chicago black district (Bronzetown, 1945); and a city in the Midwest (Jonesville, 1949).

In comparative inferences, the cases are selected by making careful comparisons: first by seeking to find cases which represent all the forms of heterogeneity in a target population, and then by checking whether they are sufficiently homogeneous with the type that one wants to represent. In this difficult but important analysis,

it is necessary to compare the characteristics of the case(s) being studied with available information about the population to which generalization is intended (…) we are suggesting that where information about the larger population (or about overlapping populations) is available, it should be used. If it is not available, then the potential risks involved in generalization still need to be noted, preferably via specification of likely types of heterogeneity that could render the findings unrepresentative. (Gomm, Hammersley and Foster, 2000: 105–106)

We are therefore very distant from the concepts of naturalistic generalization and transferability, which are unsatisfactory in various respects, for they 'do not provide a sound basis for the design, or justification, of case study research' (Gomm, Hammersley and Foster, 2000: 102). They assign to the reader a function which should also be performed by the researcher (assuming responsibility for affirming the generalizability of the study's findings). They therefore relieve the researcher of responsibility for the careful selection of cases on the basis of the variance principle, and not solely on the basis of the theoretical significance that guides theoretical sampling and all research on variables (rather than cases). As Schofield (1990) notes, all too often cases seem to be chosen for reasons of convenience and are therefore atypical in various respects.
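
To make the logic of comparative inference more concrete, the following fragment sketches the two selection strategies described at the start of this section: extreme-case selection (the two schools at opposite ends of a characteristic) and maximum-variation selection (three or more cases spread across its whole range). It is only an illustrative sketch: the school data, the 'integration_difficulty' score, and both helper functions are invented stand-ins for judgments that, in practice, rest on press reports, previous studies, interviews, or personal experience.

```python
# Hypothetical data: each case is a school with an invented score for how
# difficult the integration between natives and immigrants is (0 = none, 1 = severe).
schools = [
    {"name": "School A", "integration_difficulty": 0.9},
    {"name": "School B", "integration_difficulty": 0.1},
    {"name": "School C", "integration_difficulty": 0.5},
    {"name": "School D", "integration_difficulty": 0.7},
    {"name": "School E", "integration_difficulty": 0.3},
]

def extreme_cases(cases, key):
    """Pick the two cases at the opposite ends of the chosen characteristic."""
    ordered = sorted(cases, key=lambda case: case[key])
    return [ordered[0], ordered[-1]]

def maximum_variation(cases, key, n=3):
    """Pick n cases spread as evenly as possible across the range of the characteristic."""
    ordered = sorted(cases, key=lambda case: case[key])
    if n >= len(ordered):
        return ordered
    step = (len(ordered) - 1) / (n - 1)
    return [ordered[round(i * step)] for i in range(n)]

print(extreme_cases(schools, "integration_difficulty"))        # the two extreme schools
print(maximum_variation(schools, "integration_difficulty", 3)) # severe, average, rare difficulties
```

The point of the sketch is not the arithmetic but the selection logic: cases are chosen for the positions they occupy on a dimension of variance, not by random draw.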

The emblematic case

If we bear the variance principle in mind, there emerges a third major criterion for the construction of a sample: the typical or emblematic case.

Gouldner’s case studies (1954) on bureau-cratization in medium-sized firms, or that byCicourel (1968) on the relational constructionof the figure of the juvenile delinquent, havebeen considered amply generalizable (by bothresearchers and readers), probably becausethey were typical cases and consequentlygrasped structural aspects of the social actionin the organizations studied. Nor should weforget that the question of generalizabilityis closely tied to the phenomenon beingresearched, according to the degree of vari-ance in its states.

This means that it is possible to find cases which on their own can represent a significant feature of a phenomenon. Generalizability thus conceived concerns more general structures that are detached from the individual social practices, which are only an instance of them. In other words, the scholar does not generalize the individual case or event, which as Weber stressed is unrepeatable, but the key structural features of which it is made up, and which are to be found in other cases or events belonging to the same species or class. As Becker has recently pointed out:

in every city there is a body of social practices — forms of marriage, or work, or habitation — which don't change much, even though the people who perform them are continually replaced through the ordinary demographic process of birth, death, immigration, and emigration. (2000: 6)

On this view, the question of generalizability assumes a different significance: for example, in the conclusions to his study on the relationship between a psychotherapist and a patient suffering from AIDS, Peräkylä writes:

The results were not generalizable as descriptions of what other counselors or other professionals do with their clients; but they were generalizable as descriptions of what any counselor or other professional, with his or her clients, can do, given that he or she has the same array of interactional competencies as the participants of the AIDS counseling session have. (1997: 216, quoted in Silverman, 2000: 109)

Something similar happens in film and radio productions with noise sampling. The squeak of the door (which gives us the shivers when we watch a thriller or a horror film) does not represent all squeaks of doors, but we associate it with them. We do not think about the differences between that squeak and the one made by our front door; we notice the similarities only. These are two different ways of thinking, and most social sciences seek to find patterns of this kind.

While the verbal expressions of an interactive exchange may vary, exchange based on the question-answer pattern features a formal trans-institutional (though not universal) structure. While laying a page of a newspaper on the floor and declaring one's sovereignty over it (Goffman, 1961) is a behavior observed in one psychiatric clinic only, the need to have a private space and control over a territory has been reported many times, albeit in different forms.

INTERACTIVE, PROGRESSIVE, AND ITERATIVE SAMPLING: SOME TIPS

Having outlined the theoretical premises of an idiographic sampling theory, I shall now describe its procedural aspects. However, there is no precise logical itinerary to set out, because methodological principles and rules do not have to stand on their own – as they are instead required to do in statistical sampling theory – in that they have only a weak relation to practice. It is instead necessary to approach the entire question of sampling sequentially, and it would be misleading to plan the whole strategy beforehand. In order to achieve representativeness, the sampling plan must be set in dialogue with field incidents, contingencies, and discoveries. This is what I mean by 'interactive, progressive, and iterative sampling.' An excellent instance of this procedure 'is given in Glaser and Strauss's (1964, 1968) studies on dying in the hospital, where hypotheses were developed hand in hand with data collection' (Denzin, 1971: 269). Another example of changing or adding to the sampling plan on the basis of something

the researcher has learnt in the field is provided by Becker:

Blanche Geer and I were studying college students. At a certain point, we became interested in student 'leaders,' students who were heads of major organizations at the university (there were several hundred of them). We wanted to know how they became leaders and how they exercised their powers. So we made a list of the major organizations (which we could do because we had been there for a year and knew what those were, which we would not have known when we began) and interviewed twenty each of men and women student leaders. And got a great result — it turned out that the men got their positions through enterprise and hustling, while the women were typically appointed by someone from the university! (Howard Becker, 13/7/2002, personal communication)

Consistency must be given to the sampling reasoning, but not by mere application of procedural steps. The reasoning could be as follows.

1 The researcher usually starts from his/her research questions. Melville Dalton's were:

Why did grievers and managers form cross-cliques? Why were staff personnel ambivalent toward line officers? Why was there disruptive conflict between Maintenance and Operation? If people were awarded posts because of specific fitness, why the disparity between their given and exercised influence? Why, among executives on the same formal level, were some distressed and some not? And why were there such sharp differences in viewpoint and moral concern about given events? What was the meaning of double talk about success as dependent on knowing people rather than on possessing administrative skills? Why and how were 'control' staffs and official guardians variously compromised? What was behind the contradictory policy and practices associated with the use of company materials and services? Thus the guiding question embracing all others was: what orders the schism and ties between official and unofficial action? (1959: 274)

Research questions comprise the concepts and categories (behaviors, attitudes, and so on) that the researcher intends to study.

2 The researcher conducts primary (or 'provisional' and 'open' 7 : Strauss and Corbin, 1990: 193) sampling in order to collect cases in accordance with the concepts. As Payne and Williams (2005: 295) suggest, 'research design should plan for anticipated generalizations, and that generalization should be more explicitly formulated within a context of supporting evidence.'

3 Because not every concept can be directly studied, when the researcher constructs the provisional sample, s/he considers the following aspects:

a specificity (focusing on specific social activities with distinctive features, like rituals or ceremonies);

b the field's degree of openness (open or closed places);

c intrusiveness (the endeavor to reduce the researcher's visibility);

d institutional accessibility (free-entry versus limited-entry situations within the organization);

e significance (frequent and high organizational significance of social activities).

4 It is advisable to sample types of actions or events: 'not, then, men and their moments. Rather moments and their men' (Goffman, 1967: 3), 'not only people but moments of lived life' (Converse and Schuman, 1974: 1), 'incidents and not persons per se!' (Strauss and Corbin, 1990: 177), in contrast with the common practice of sampling bodies, and of seeking information from these bodies about behaviors and events that are never observed directly (Cicourel, 1996). There are two reasons for this important recommendation: first, it serves to prevent the survey sampling mistake concerning the transferability of ideas about representativeness; second, the same person may be engaged in overlapping activities. For example, Dalton (1959), when studying power struggles in companies, found five 'types of cliques': vertical (symbiotic and parasitic), horizontal (defensive and aggressive), and random. If we sample individuals, we find that they belong to more than one clique according to the situation, intention, and so on. If we consider activities, everything becomes simpler.

5 To date, four main types of sampling have been developed in social research: purposive, quota, emblematic, and snowball. When cases are selected, attention should be paid to the variance of the concept, so that different voices or cases can be included in the sample.

6 As the research proceeds, the researcher will refine his/her ideas, categories and concepts, or come up with new ones. The important thing is to make connections among them, thus formulating working hypotheses. Even though not every hypothesis is testable (indeed the most interesting ones often are not), if the reader is to be persuaded, they must be formulated in a testable way.

7 When the researcher has formulated hypotheses, s/he restarts sampling in order to collect cases systematically relating to each hypothesis, seeking to make his/her analysis consistent. Strauss and Corbin call this second sampling 'relational and variational': it 'is associated with axial coding. It aims to maximize the finding of differences at the dimensional level' (1990: 176). They depict the research process as funnel-shaped: through three increasingly focused steps (open, axial, and selective), the researcher clarifies his/her statements, because 'consistency here means gathering data systematically on each category' (Strauss and Corbin, 1990: 178). When the researcher finds an interesting aspect, s/he must always check whether it occurs in other samples.

8 Generalization must be ensured 'across and within cases (…) [because] the danger of error in drawing general conclusions from a small number of cases must not be underestimated' (Gomm, Hammersley and Foster, 2000: 98). This concept has sometimes been rubricated as 'internal generalization,' and it implies different strategies which take account of diverse dimensions: time, sites, days, and people. The researcher should collect cases of behavior recurring at different moments of time. Because the researcher cannot observe the case-study population twenty-four hours a day, s/he must take a decision on when and where s/he will observe the population (Schatzman and Strauss, 1973: 39–41; Corsaro, 1985: 28–32). Unfortunately,

case study researchers rarely make clear what they take to be the temporal boundaries of the cases they have studied (…) it is not unusual for case studies of schools to focus on one year-group or cohort of students and to assume that the experience of these students is representative of other cohorts, past and future. (Gomm, Hammersley and Foster, 2000: 109)

Social practices always occur in certain places and at certain times of the day. Only if the researcher knows all the rituals of the organization observed can s/he draw a representative sample.

A classic illustration is provided by Berlak et al.'s study of progressive primary school practice in Britain in the 1970s (Berlak and Berlak, 1981; Berlak et al., 1975). They argued that previous American accounts had been inaccurate because observation had been brief and had tended to take place in the middle of the week, not on Monday or Friday. On the basis of these observations, the inference had been drawn that in progressive classrooms children simply chose what they wanted to do and got on with it. As Berlak et al. document, however, what typically happened was that the teachers set out the week's work on Mondays, and on Fridays they checked that it had been completed satisfactorily. Thus, earlier studies were based on false temporal generalizations within the cases they investigated. (Gomm, Hammersley and Foster, 2000: 109–110)

Qualitative researchers do not seek to know the distribution of such behaviors (how many times); they only seek to know whether they are recurrent and significant in the organization under study. In addition, 'our concern is with representativeness of concepts' (Strauss and Corbin, 1990: 190). And finally, in regard to people and sites,

there is also likely to be variation in the behavior of both teachers and pupils across different contexts within a school. While most contact between members of the two groups probably occurs in classrooms, they also meet one another in other places as well: in assembly halls, dining rooms, corridors, on game fields, and so on (…) Teacher-pupil relationships are likely to vary across mathematics classrooms, drama studios and science laboratories, for example. (Gomm, Hammersley and Foster, 2000: 111)

9 The researcher can sample new incidents or s/he can review incidents already collected: 'Theoretical sampling is cumulative. This is because concepts and their relationships also accumulate through the interplay of data collection and analysis […] until theoretical saturation of each category is reached' (Strauss and Corbin, 1990: 178, 188). A schematic sketch of this iterative loop is given after this list.

10 This interplay between sampling and hypothesis testing is needed because:

a representative samples are not predicted in advance but found, constructed, and discovered gradually in the field;

b it reflects the researcher's experience, previous studies, and the literature on the topic. In other words, the researcher will come to know the variance of a phenomenon cumulatively, study by study. As Gomm, Hammersley and Foster (2000: 107) acknowledge:

it is possible for subsequent investigations to build on earlier ones by providing additional cases, so as to construct a sample over time that would allow effective generalization. At the present, this kind of cumulation is unusual (…) the cases are not usually selected in such a way as to complement previous work;

c representative samples are used to justify the researcher's statements.
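
A minimal sketch of this interactive, progressive, and iterative loop is given below, as anticipated in point 9. Everything in it is a hypothetical stand-in: the functions collect_incidents and saturated, the categories, and the numeric saturation threshold are invented for illustration and are not part of Strauss and Corbin's procedure; in real research, theoretical saturation is an analytic judgment, not a count.

```python
import random

random.seed(0)  # fixed seed so the illustrative run is reproducible

POSSIBLE_CATEGORIES = ["ritual", "negotiation", "conflict", "avoidance"]
SITES = ["classroom", "corridor", "dining room"]
TIMES = ["Monday", "mid-week", "Friday"]

def collect_incidents(n, focus=None):
    """Stand-in for fieldwork: observe n incidents (events, not persons),
    optionally focused on one emerging category (relational/variational sampling)."""
    categories = [focus] * n if focus else [random.choice(POSSIBLE_CATEGORIES) for _ in range(n)]
    return [{"category": c, "site": random.choice(SITES), "time": random.choice(TIMES)}
            for c in categories]

def saturated(category, incidents, threshold=5):
    """Crude stand-in for saturation: enough incidents of the category,
    observed in at least two sites and at two different times (internal generalization)."""
    hits = [i for i in incidents if i["category"] == category]
    return (len(hits) >= threshold
            and len({i["site"] for i in hits}) >= 2
            and len({i["time"] for i in hits}) >= 2)

incidents = collect_incidents(10)                        # open, provisional sampling (point 2)
categories = sorted({i["category"] for i in incidents})  # categories emerging from analysis (point 6)

for category in categories:                              # resample per category (points 7 and 9)
    while not saturated(category, incidents):
        incidents += collect_incidents(3, focus=category)

print(f"{len(incidents)} incidents collected for categories: {categories}")
```

The design choice the sketch tries to capture is the recursiveness stressed above: the sample is not fixed in advance but grows in dialogue with what the analysis of earlier incidents reveals.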

It is therefore apparent that, although on the one hand 'generalization is not an issue that can be dismissed as irrelevant by case study researchers' (Gomm, Hammersley and Foster, 2000: 111), on the other it is not the impossible undertaking that survey researchers have always mocked. Finally, whilst probability sampling has a substantive aim – to construct a sample in order to extend the findings to the population – interactive sampling has a further task: to reflect, through its recursiveness, on the plausibility of generalizations.

CONCLUSION

Statistical inference (survey) and theoretical inference (experiment), as the two legitimate ways to draw general conclusions, continue to be used even though their application is fraught with difficulties; in fact, they end up deviating from their theoretical principles and assumptions. Hence one fails to understand why it is not possible to resort to other forms of generalization which, though unsatisfactory, are no more unsatisfactory than those deemed superior to them. For that matter, contemporary social scientists do not have to choose between perfect and imperfect forms of generalization, but between forms of inference whose strengths and weaknesses depend on the researcher's cognitive aims, the research situation, and the nature of the phenomenon under study.

The central idea of this essay lies midway between two highly authoritative and well-known methodological proposals: Durkheim's (1912) cas pur (the 'pure case'), with positivist overtones, and Max Weber's (1904) theory of ideal types. Durkheim believed that the simplest society of all for the study of the elementary forms of religious life was the Australian tribe of the Arunta. The Flemish statistician and sociologist Adolphe Quételet (1796–1874) looked to the crowd for his homme moyen (the average man), who represented the 'normality' of the species. He was prompted to do so by the discovery that certain characteristics (physical and biological) of individuals were distributed in the populations which he studied according to the 'normal' curve constructed by the mathematician Gauss.

Conversely, Weber maintained that 'feudal society,' 'bureaucracy,' 'charisma' were genetic concepts (developed with a view to a causal explanation) and limiting concepts. They consequently could not be evaluated in terms of their reality-describing adequacy, only in terms of their instrumental efficacy. For Weber (1904), an ideal type was not a representation of the real; rather, it was formed by a one-sided accentuation of one or more points of view and by the connection of a quantity of diffuse, discrete, more or less present and occasionally absent, particular phenomena. Given the conceptual purity of an ideal type, it could never be empirically detected in reality; it was a utopian entity.

The typical or emblematic case suggested as a criterion for the construction of a sample stands midway between the claim to have discovered the pure case (the quintessence of the phenomenon studied) and the renunciation of the empirical search for cases of interest because of their typicality.

At the end of the 1980s, in a study on the interview, I documented the rituals and rhetorical strategies used by an interviewer as he made telephone calls to 10 adolescents in order to arrange subsequent face-to-face interviews (Gobo, 1990, 2001). The research involved the recording of the telephone calls and subsequent discourse analysis. Some years later, Maynard and Schaeffer (1999) conducted very similar research in the United States. Comparison between the results of the two research studies showed that the three researchers had discovered almost identical patterns of behavior. The reason for this similarity was probably that the survey interviewers had been trained with textbooks widely used on both sides of the Atlantic, and that they had used artifacts – technological (telephone, keyboard), cognitive (questionnaires), and organizational (scripts or interview formats) – which made the social activities very similar. There are consequently numerous social research settings in which a few cases may suffice to make a generalization, provided they are chosen carefully.

NOTES

1 It should be stressed that the distinction between probability and non-probability sampling does not mark the boundary between qualitative and quantitative research: in fact, non-probability samples are also used for surveys (quota, telephone, and so on) and for experiments.

2 This compromise, centered on the idea of complementarity, is still accepted by numerous methodologists: see for instance Payne and Williams (2005: 297).

3 Indeed, there are some who maintain that generalizability is perhaps the wrong word for what qualitative researchers seek to achieve: 'Generalization is (…) [a] word (…) that should be reserved for surveys only' (Alasuutari, 1995: 156–7).

4 However, Denzin’s (1971) position was verydifferent at the end of the 1960s: he expressed himselfin favor of operationalization (‘this does not mean thatoperationalization is avoided – it merely suggests thatthe point of operazionalization is delayed until thesituated meaning of concepts is discovered,’ p. 268);he believed that the use of indicators was important(‘a series of empirical indicators relevant to each database and hypothesis must be constructed, and, last,research must progress in a formative manner in whichhypotheses and data continually interrelate,’ p. 269),and he argued that ‘it is necessary for researchers todemonstrate the representativeness of those units inthe total population of similar events’ (p. 269).

5 Gomm et al. (2000: 112, endnote 2) acutely point out: 'there is some ambiguity in Stake's position. He also recognizes that case studies can be instrumental rather than intrinsic, and in an outline of the "major conceptual responsibilities" of case study inquiry he lists the final one as "developing assertions or generalizations about the case" (Stake, 1994: 244).'

6 For this reason, Payne and Williams' statement that 'opportunistic site selection will normally be incompatible with even moderatum generalization' (2005: 310) appears too severe and without empirical justification.

7 As Strauss and Corbin (1990: 176) explain: 'open sampling is associated with open coding. Openness rather than specificity guides the sampling choices.' Open sampling can be performed purposively (e.g. pp. 183–4) or systematically (e.g. p. 184), or it occurs fortuitously (e.g. pp. 182–3). It includes on-site sampling.

REFERENCES

Alasuutari, Pertti 1995 Researching Culture, London: Sage.

Alberoni, Francesco et al. 1967 L'attivista di partito, Bologna: Il Mulino.

Bailey, Kenneth D. 1978 Methods in Social Research, New York: Free Press.

Barton, Allen H. and Lazarsfeld, Paul F. 1955 Some functions of qualitative analysis in social research, Frankfurter Beiträge zur Soziologie, 1: 321–361.

Becker, Howard 1953 Becoming a marihuana user, American Journal of Sociology, 59: 235–242.

Becker, Howard 1998 Tricks of the Trade, Chicago and London: University of Chicago Press.

Becker, Howard 2000 Italo Calvino as Urbanologist, paper.

Bourdieu, Pierre et al. 1993 La Misère du monde, Paris: Éditions du Seuil, transl. The Weight of the World: Social Suffering in Contemporary Society, Cambridge: Polity, 1999.

Burgess, Ernest W. 1927 Statistics and case studies as methods of sociological research, Sociology and Social Research, 12: 103–120.

Burrell, Gibson and Morgan, Gareth 1979 Sociological Paradigms and Organizational Analysis, London: Heinemann.

Capecchi, Vittorio 1972 Struttura e tecniche della ricerca, in Pietro Rossi (ed.), Ricerca sociologica e ruolo del sociologo, Bologna: Il Mulino.

Chein, Isidor 1963 An introduction to sampling, in C. Selltiz and M. Jahoda (eds.), Research Methods in Social Relations, New York: Holt & Rinehart, pp. 509–45.

Cicourel, Aaron V. 1968 The Social Organization of Juvenile Justice, New York: Wiley.

Cicourel, Aaron V. 1996 Ecological validity and white room effects, Pragmatics and Cognition, 4(2): 221–263.

Cicourel, Aaron V. and Boese, R. 1972 Sign language acquisition and the teaching of deaf children, in D. Hymes et al. (eds.), Functions of Language in the Classroom, New York: Teachers College Press.

Connolly, Paul 1998 'Dancing to the wrong tune': Ethnography, generalization, and research on racism in schools, in P. Connolly and B. Troyna (eds.), Researching Racism in Education, Buckingham: Open University Press, pp. 122–39.

Converse, Jean M. and Schuman, Howard 1974 Conversations at Random: Survey Research as Interviewers See It, New York: Wiley.

Corsaro, William A. 1985 Friendship and Peer Culture in the Early Years, Norwood, NJ: Ablex Publishing Corporation.

Crapanzano, Vincent 1980 Tuhami: Portrait of a Moroccan, Chicago: University of Chicago Press.

Cronbach, Lee J. 1975 Beyond the two disciplines of scientific psychology, American Psychologist, 30: 116–27.

Cronbach, Lee J. 1982 Designing Evaluations of Educational and Social Programs, San Francisco: Jossey-Bass.

Dalton, Melville 1959 Men Who Manage, New York: Wiley.

De Martino, Ernesto 1961 La terra del rimorso, Milano: Il Saggiatore, transl. The Land of Remorse: A Study of Southern Italian Tarantism, London: Free Association Books, 2005.

Denzin, Norman K. 1971 Symbolic interactionism and ethnomethodology, in J.D. Douglas (ed.), Understanding Everyday Life, London: Routledge and Kegan Paul, pp. 259–284.

Denzin, Norman K. 1983 Interpretive interactionism, in G. Morgan (ed.), Beyond Method: Strategies for Social Research, Beverly Hills, CA: Sage, pp. 129–46.

Durkheim, Emile 1912 Les formes élémentaires de la vie religieuse, Paris: Alcan, transl. The Elementary Forms of the Religious Life, London: G. Allen & Unwin, 1915.

Festinger, Leon, Riecken, Henry W. and Schachter, Stanley 1956 When Prophecy Fails, New York: Harper Torchbooks.

Fielding, Nigel 1981 The National Front, London: Routledge.

Galtung, Johan 1967 Theory and Methods of Social Research, Oslo: Universitetsforlaget.

Garfinkel, Harold 1967 Studies in Ethnomethodology, Englewood Cliffs, NJ: Prentice Hall.

Geertz, Clifford 1972 Deep play: notes on the Balinese cockfight, Daedalus, 101: 1–37.

Geertz, Clifford 1973 The Interpretation of Cultures, New York: Basic Books.

Glaser, Barney G. and Strauss, Anselm L. 1967 The Discovery of Grounded Theory, Chicago: Aldine.

Gobo, Giampietro 1990 The First Call: Rituals and Rhetorical Strategies in the First Telephone Call with Italian Respondents, paper, Annual Meeting of the A.S.A., Washington D.C., August 11–15.

Gobo, Giampietro 2001 Best practices: rituals and rhetorical strategies in the 'initial telephone contact', Forum Qualitative Social Research, 2(1), http://www.qualitative-research.net/fqs-texte/1-01/1-01gobo-e.htm.

Gobo, Giampietro 2004 Sampling, representativeness and generalizability, in C. Seale, G. Gobo, J.F. Gubrium and D. Silverman (eds.), Qualitative Research Practice, London: Sage, pp. 435–56.

Goetz, J.P. and LeCompte, Margaret D. 1984 Ethnography and Qualitative Design in Educational Research, Orlando, FL: Academic Press.

Goffman, Erving 1961 Asylums, New York: Doubleday.

Goffman, Erving 1967 Interaction Ritual, New York: Doubleday Anchor.

Goldthorpe, John H., Lockwood, David, Bechhofer, Frank and Platt, Jennifer 1968 The Affluent Worker: Industrial Attitudes and Behaviour, Cambridge: Cambridge University Press.

Gomm, Roger, Hammersley, Martyn and Foster, Peter (eds.) 2000 Case Study Method, London: Sage.

Goode, William and Hatt, Paul K. 1952 Methods in Social Research, New York: McGraw-Hill.

Gouldner, Alvin W. 1954 Patterns of Industrial Bureaucracy, New York: The Free Press.

Griaule, Marcel 1948 Dieu d'eau: entretiens avec Ogotemmêli, Paris: Éditions du Chêne.

Groves, Robert M. and Lyberg, Lars E. 1988 An overview of nonresponse issues in telephone surveys, in R.M. Groves, P.P. Biemer, L.E. Lyberg, J.T. Massey, W.L. Nicholls II and J. Waksberg (eds.), Telephone Survey Methodology, New York: Wiley.

Guba, Egon G. 1981 Criteria for assessing the trustworthiness of naturalistic enquiries, Educational Communication and Technology Journal, 29(2): 75–92.

Guba, Egon G. and Lincoln, Yvonna S. 1981 Effective Evaluation: Improving the Usefulness of Evaluation Results Through Responsive and Naturalistic Approaches, San Francisco: Jossey-Bass.

Guba, Egon G. and Lincoln, Yvonna S. 1982 Epistemological and methodological bases of naturalistic inquiry, Educational Communication and Technology Journal, 30: 233–252.

Hammersley, Martyn 1992 What's Wrong with Ethnography?, London: Routledge.

Hebdige, Dick 1979 Subculture: The Meaning of Style, London and New York: Routledge.

Kahneman, Daniel and Tversky, Amos 1972 Subjective probability: A judgment of representativeness, Cognitive Psychology, 3: 430–454.

Lieberson, Stanley 1992 Small N's and big conclusions: An examination of the reasoning in comparative studies based on a small number of cases, reprinted in R. Gomm, M. Hammersley and P. Foster (eds.) 2000, op. cit.

Lincoln, Yvonna S. and Guba, Egon G. 1979 Naturalistic Inquiry, Beverly Hills, CA: Sage. (Reprinted partially in Gomm, Roger, Hammersley, Martyn and Foster, Peter (eds.) 2000 Case Study Method, London: Sage, pp. 27–42.)

Mason, Jennifer 1996 Qualitative Researching, Newbury Park: Sage.

Maynard, Douglas W. and Schaeffer, Nora Cate 1999 Keeping the gate, Sociological Methods & Research, 28(1): 34–79.

Mitchell, Clyde J. 1983 Case and situation analysis, Sociological Review, 31: 187–211.

Mitchell, R. and Karttunen, S. 1991 Perché e come definire un artista?, Rassegna Italiana di Sociologia, XXXII(3): 349–64.

Pawson, Ray and Tilley, Nick 1997 Realistic Evaluation, London: Sage.

Payne, Geoff and Williams, Malcolm 2005 Generalization in qualitative research, Sociology, 39(2): 295–314.

Peräkylä, Anssi 1997 Reliability and validity in research based upon transcripts, in David Silverman (ed.), Qualitative Research, London: Sage, pp. 201–19.

Pollner, Melvin and McDonald-Wikler, Lynn 1985 The social construction of unreality: a case study of a family's attribution of competence to a severely retarded child, Family Process, 24: 241–254.

Ragin, Charles C. and Becker, Howard S. (eds.) 1992 What is a Case?, Cambridge: Cambridge University Press.

Rosenhan, David L. 1973 On being sane in insane places, Science, 179: 250–8.

Rositi, Franco 1993 Strutture di senso e strutture di dati, Rassegna Italiana di Sociologia, 2: 177–200.

Sacks, Harvey 1992 Lectures on Conversation, Oxford: Blackwell.

Schatzman, Leonard and Strauss, Anselm L. 1973 Field Research, Englewood Cliffs, NJ: Prentice Hall.

Schofield, Janet Ward 1990 Increasing the generalizability of qualitative research, in E.W. Eisner and A. Peshkin (eds.), Qualitative Inquiry in Education: The Continuing Debate, New York: Teachers College Press, pp. 201–232.

Silverman, David 2000 Doing Qualitative Research, London: Sage.

Stake, Robert 1978 The case study method in social enquiry, Educational Researcher, 7: 5–8. (Reprinted in Gomm, Roger, Hammersley, Martyn and Foster, Peter (eds.) 2000 Case Study Method, London: Sage, pp. 19–26.)

Strauss, Anselm 1987 Qualitative Analysis for Social Scientists, Cambridge: Cambridge University Press.

Strauss, Anselm and Corbin, Juliet 1990 Basics of Qualitative Research, London: Sage.

Tversky, Amos and Kahneman, Daniel 1974 Judgment under uncertainty: Heuristics and biases, Science, 185: 1123–1131.

Weber, Max 1904 Die 'Objektivität' sozialwissenschaftlicher und sozialpolitischer Erkenntnis, Archiv für Sozialwissenschaft und Sozialpolitik, XIX: 22–87, transl. in The Methodology of the Social Sciences, Glencoe, IL: The Free Press, 1949.

Williams, Malcolm 2000 Interpretivism and generalization, Sociology, 34(2): 209–24.

Xing Xu, Zhonghe Zhou, Xiaolin Wang, Xuewen Kuang, Fucheng Zhang and Xiangke Du 2003 Four-winged dinosaurs from China, Nature, 421: 335–339.

Yin, Robert K. 1984 Case Study Research, Thousand Oaks: Sage.

Znaniecki, Florian 1934 The Method of Sociology, New York: Farrar & Rinehart.