J. DOUGLAS CARROLL and PAUL E. GREEN*
Current methods of multiple correspondence analysis (MCA) provide configurations that are expressed in terms of principal axes. These solutions are not invariant over rotations. The authors propose an approach to MCA that entails an INDSCAL analysis of normalized Burt matrices (as commonly obtained from MCA). The resulting configuration is uniquely oriented, and dimension weights also are obtained for each contributory data set. The method is applied to survey data describing relationships among respondent demographic characteristics and recent car purchases.
An INDSCAL-Based Approach to Multiple Correspondence Analysis
Though correspondence analysis (CA) has been available for many years (Benzécri 1969) as a technique for exploratory data analysis, with few exceptions (e.g., Green, Rao, and DeSarbo 1978; Holbrook, Moore, and Winer 1982), U.S. marketing researchers have not reported many applications of its use. The situation may well change in the future, given the availability of recent publications (Gifi 1981; Greenacre 1984; Greenacre and Hastie 1987; Heiser 1981; Jambu and Lebeaux 1983; Lebart, Morineau, and Warwick 1984; Meulman 1982; Nishisato 1980) and computer programs (e.g., Nishisato 1980; Lebart and Morineau 1982; Nishisato and Nishisato 1983) for implementing correspondence analysis.
Hoffman and Franke's (1986) review article, in particular, should spur interest in the approach. Though emphasizing the more common two-way case, they also discuss a multiple correspondence analysis (MCA) example involving 34 students' purchase/consumption patterns for eight soft drinks. The authors use the French method of "dédoublement" to analyze the data, so that each soft drink point appears twice (picked vs. not picked) in the configuration.
Hoffman and Franke also discuss methods for interpreting the configurations obtained from CA. However, because CA configurations typically are oriented in terms of principal axes, the configuration can be rotated freely
*J. Douglas Carroll is Distinguished Member of Technical Staff, AT&T Bell Laboratories. Paul E. Green is S. S. Kresge Professor of Marketing, The Wharton School, University of Pennsylvania.
without affecting the interpoint distances of (say) the column points. Carroll, Green, and Schaffer (1986, 1987) describe how distance comparisons (Carroll 1968; Saporta 1980) can be made between row and column points. Still, their approach also leads to a principal-axes orientation.
We propose an alternative approach involving three-way or higher-way categorical data, in which an INDSCAL-like analysis (Carroll and Chang 1970) is applied to a set of matrices obtained from the dummy-variable representation of N objects' classifications by three or more categorical variables. We first review the essentials of conventional MCA in the context of a small data set, then describe the MCA/INDSCAL procedure and apply it to a large data set involving respondents' demographic characteristics and their relationship to car purchase characteristics.
MULTIPLE CORRESPONDENCE ANALYSIS
The usual MCA analysis of categorical data starts with a multiway contingency table (or its equivalent). The input data are assumed to be an N objects by categorical variables matrix. For example, consider Table 1, which shows 25 students' coded responses to three questions:

1. What soft drink do you most prefer: Coke (1), 7-Up (2), Dr. Pepper (3), or Nehi (4)?
2. How much do you spend per week on soft drinks and snacks: less than $2.00 (1), $2.00-3.99 (2), or $4.00 and more (3)?
3. Which snack do you most often eat with soft
194 JOURNAL OF MARKETING RESEARCH, MAY 1988
Table 1
ORIGINAL RESPONSE PATTERN AND (TRANSPOSED) INDICATOR MATRIX A'

A. Original response pattern (25 respondents)

  Soft drink   2 4 1 1 3 2 4 1 4 3 2 3 2 4 4 3 4 2 3 1 2 1 2 2 4
  $            1 3 2 2 2 1 3 2 3 2 1 2 1 3 3 2 3 1 2 2 3 3 3 3 3
  Snack        5 4 1 2 3 5 4 1 4 2 5 3 5 4 4 2 5 5 3 1 4 5 3 4 1

B. Corresponding (transposed) indicator matrix A' (column sums of A at right)

  Soft drinks  1   0011000100000000000101000    5
               2   1000010000101000010010110    8
               3   0000100001010001001000000    5
               4   0100001010000110100000001    7
  $            1   1000010000101000010000000    5
               2   0011100101010001001100000    9
               3   0100001010000110100011111   11
  Snacks       1   0010000100000000000100001    4
               2   0001000001000001000000000    3
               3   0000100000010000001000100    4
               4   0100001010000110000010010    7
               5   1000010000101000110001000    7

  Row sums (one per respondent): 3 each; total = 75.
drinks: pretzels (1), peanuts (2), M&M's (3), Fritos (4), or dried fruits (5)?
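As a check on Table 1, the indicator matrix can be rebuilt directly from the three response patterns. The following is a minimal numpy sketch (our own illustration, not part of the original article; all names are ours):

```python
import numpy as np

# Response patterns from Table 1A (25 respondents, 3 categorical attributes).
soft_drink = [int(c) for c in "2411324143232443423121224"]  # 4 levels
dollars    = [int(c) for c in "1322213232121332312233333"]  # 3 levels
snack      = [int(c) for c in "5412354142535442553145341"]  # 5 levels

def indicator(codes, n_levels):
    """One row per respondent, one dummy (0/1) column per category level."""
    out = np.zeros((len(codes), n_levels), dtype=int)
    out[np.arange(len(codes)), np.array(codes) - 1] = 1
    return out

# A is the 25 x 12 indicator matrix; Table 1B displays its transpose A'.
A = np.hstack([indicator(soft_drink, 4),
               indicator(dollars, 3),
               indicator(snack, 5)])

# Column sums reproduce the counts in Table 1B: 5 8 5 7 | 5 9 11 | 4 3 4 7 7,
# and every row sums to 3 (one pick per attribute), for a grand total of 75.
```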
Correspondence Analysis Method

The data of Table 1A can be portrayed in a three-way contingency table. For expository purposes, however, we first review CA for a two-way table.
Table 2
CROSS-PRODUCT MATRIX G = A'A

                        Coke  7-Up  Dr.P  Nehi  <$2  $2-3.99  $4+  Pret  Pean  M&M  Frit  Dried
  Soft drinks
    Coke                  5     0     0     0    0      4      1     3     1    0     0     1
    7-Up                  0     8     0     0    5      0      3     0     0    1     2     5
    Dr. Pepper            0     0     5     0    0      5      0     0     2    3     0     0
    Nehi                  0     0     0     7    0      0      7     1     0    0     5     1
  $
    Less than $2.00       0     5     0     0    5      0      0     0     0    0     0     5
    $2.00-3.99            4     0     5     0    0      9      0     3     3    3     0     0
    $4.00 and more        1     3     0     7    0      0     11     1     0    1     7     2
  Snacks
    Pretzels              3     0     0     1    0      3      1     4     0    0     0     0
    Peanuts               1     0     2     0    0      3      0     0     3    0     0     0
    M&M's                 0     1     3     0    0      3      1     0     0    4     0     0
    Fritos                0     2     0     5    0      0      7     0     0    0     7     0
    Dried fruits          1     5     0     1    5      0      2     0     0    0     0     7
Let F = [f_ij] be an I x J two-way frequency table. Let

(1) r_i = Σ_j f_ij, i = 1, 2, ..., I,

be the row marginals of F and

(2) c_j = Σ_i f_ij, j = 1, 2, ..., J,

be the column marginals. Next, define

(3) R = diag(r_i)
(4) C = diag(c_j)

as I x I and J x J diagonal matrices with the row and column marginals of F defining their respective diagonals.
Two-way correspondence analysis, in t dimensions, consists of first defining the rescaled matrix

(5) F* = R^(-1/2) F C^(-1/2)

and then finding the best least squares (t + 1)-dimensional approximation to F*:

(6) F*_(t+1) = X*_(t+1) Y*'_(t+1),

where X*_(t+1) and Y*_(t+1) are I x (t + 1) and J x (t + 1) matrices, respectively, chosen so that F*_(t+1) is as close as possible to F* in a least squares sense. Then X_t and Y_t, defined as
(7) X_t = R^(-1/2) X**_t
(8) Y_t = C^(-1/2) Y**_t,

are coordinates of the levels of attribute 1 (the row attribute) and attribute 2 (the column attribute), respectively, in a CA representation. (The definition of X**_t and Y**_t follows.)
One way to determine X*_(t+1) and Y*_(t+1) is to define the singular value decomposition (SVD) of F*:

(9) F* = UPV',
where U and V are each columnwise orthonormal matrices whose columns are singular vectors (left and right, respectively) of F*; P is a diagonal matrix containing the ordered singular values of F* (Green with Carroll 1976). The singular values can be so defined as to be all non-negative, with the diagonal elements defined so that p_1 is the largest, p_2 the second largest, and so on.
We now define X*_(t+1) and Y*_(t+1) as

(10) X*_(t+1) = U_(t+1) P_(t+1)
(11) Y*_(t+1) = V_(t+1),

where U_(t+1) comprises the first t + 1 columns of U (thus containing the first t + 1 left singular vectors of F*, ordered by their singular values).^2 V_(t+1) consists of the first t + 1 columns of V, and P_(t+1) is the (t + 1) x (t + 1) diagonal matrix whose diagonals are the first t + 1 singular values.
X**_t and Y**_t are defined by dropping the first column from X*_(t+1) and Y*_(t+1), respectively, because these columns are "trivial" column vectors whose entries, when rescaled per equations 7 and 8, are all the same constant.
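Equations 1 through 11 can be collected into a short computational sketch. The numpy function below is our own hedged illustration (the names and the use of `numpy.linalg.svd` are ours, not the authors'):

```python
import numpy as np

def correspondence_analysis(F, t=2):
    """Two-way CA per equations 1-11: rescale F to F* = R^(-1/2) F C^(-1/2),
    take its SVD, drop the trivial leading dimension, and rescale back to
    row coordinates X_t = R^(-1/2) U_t P_t and column coordinates
    Y_t = C^(-1/2) V_t."""
    F = np.asarray(F, dtype=float)
    r = F.sum(axis=1)                      # row marginals (eq. 1)
    c = F.sum(axis=0)                      # column marginals (eq. 2)
    F_star = F / np.sqrt(np.outer(r, c))   # eq. 5
    U, p, Vt = np.linalg.svd(F_star)       # F* = UPV' (eq. 9), p descending
    # The leading singular triple is the "trivial" one; keep the next t.
    Xt = (U[:, 1:t + 1] * p[1:t + 1]) / np.sqrt(r)[:, None]
    Yt = Vt.T[:, 1:t + 1] / np.sqrt(c)[:, None]
    return Xt, Yt
```

A convenient sanity check: the leading singular value of F* is always 1, and the retained row (column) coordinates are centered with respect to the row (column) marginals.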
Application of CA to the A' Matrix

As stated before, MCA amounts simply to application of CA to the pseudocontingency table F = A', which is usually called the "indicator matrix." We might raise the question of why this leads us to consider G = A'A. To see the reason, we first must go back to the SVD of F* and consider one computational procedure for obtaining it. This procedure entails first defining the matrix
(12) Q = F*(F*)'

and finding the SVD of this symmetric (positive definite or semidefinite) matrix, Q. It turns out that the singular values of Q (which, because Q is symmetric, are Q's eigenvalues) are the squares of the singular values of F*, whereas the singular vectors (eigenvectors) of Q are the same as the left singular vectors of F*; that is,
(13) Q = UP²U'.

We can use this fact to compute the left singular vectors and the singular values of F*. (Should we need them, the right singular vectors of F* are given by V' = P^(-1)U'F*.)
We further consider the definition of Q. We have

(14) Q = F*(F*)',

but recall that F* = R^(-1/2) F C^(-1/2); thus we can write

(15) Q = R^(-1/2) F C^(-1) F' R^(-1/2).
^2 The diagonal matrix of singular values obviously could have been absorbed in either X and Y, or split in some way between the two. The way we chose to define X and Y absorbs P entirely in X_t. This leads to an interpretation of X_t in terms of accounting for what Benzécri (1969) calls the "chi square metric," defined between rows of F; Y_t is defined in terms of the "barycentric" principle (Carroll, Green, and Schaffer 1986).
In the problem of interest, where we substituted (for F) the pseudocontingency table F = A', we now have

(16) Q = R_F^(-1/2) A' C_F^(-1) A R_F^(-1/2).

However, C_F, where F = A', is just the diagonal matrix of the column marginals of A', or the row marginals of A, which we have seen to be constant (and equal to the number, P, of attributes). Thus C_F, and therefore its inverse C_F^(-1), is a scalar matrix (i.e., a constant times an identity), which can be denoted here as P^(-1)I (= (1/P)I). In this case we have

(17) Q = P^(-1) R_F^(-1/2) A'A R_F^(-1/2) = P^(-1) R_F^(-1/2) G R_F^(-1/2).

P^(-1) is just an overall constant multiplier (which does not matter for present purposes) because the number of attributes can be viewed as fixed and the diagonal entries of P^(-1)I each contain the reciprocal of that number. (Throughout this article, we ignore overall scale factors, such as the multiplier 1/P, that do not affect the relative scaling of these solutions.) We also note that

(18) R_F = diag(G),

which is the diagonal matrix whose diagonals are the row marginals of F = A'. It follows that an MCA can be accomplished via an SVD of

(19) G* = H^(-1/2) G H^(-1/2),

where H = R_F = diag(G). We note that dropping the largest singular value of F*, and its associated singular vector, amounts to dropping (or subtracting out) the dominant singular vector of G*. (In practice, this can be accomplished by simpler computational steps not requiring an SVD of G*, but this is not critical to our conceptual argument.)

The Computational Example
We now return to the computational example. As can be noted from Table 2, G provides a complete summary of what is known about the incidence of attribute-level co-occurrences. First, the three square matrices along the main diagonal of G contain diagonal elements that equal the sums of the three submatrices of A. Thus, the entries 5, 8, 5, 7 in the first main diagonal block of Table 2 are the columnar sums of the soft drink submatrix in Table 1B.
Next, let us consider the first off-diagonal submatrix,

  0 4 1
  5 0 3
  0 5 0
  0 0 7,
relating soft drink brand preferences to dollars spent on soft drinks per week. Of the five respondents who chose Coke as most preferred, four spend $2.00 to $3.99 per week and one spends $4.00 and more on soft drinks. Of the eight respondents who chose 7-Up as most preferred, five spend less than $2.00 per week and three spend $4.00 and more. This submatrix is simply a cross-tabulation of soft drink preferences by dollars spent on soft drinks.
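The cross-tabulations above, and the renormalization G* = H^(-1/2) G H^(-1/2) of equation 19, are easy to verify numerically. A minimal sketch (our own code, not the authors'), assuming A is the N x 12 indicator matrix of Table 1:

```python
import numpy as np

def burt_and_gstar(A):
    """Burt matrix G = A'A and its renormalization G* = H^(-1/2) G H^(-1/2),
    where H = diag(G) holds the attribute-level counts (eq. 19)."""
    A = np.asarray(A, dtype=float)
    G = A.T @ A
    hs = 1.0 / np.sqrt(np.diag(G))   # diagonal entries of H^(-1/2)
    G_star = G * np.outer(hs, hs)
    return G, G_star
```

For the Table 1 data, diag(G) reproduces the counts 5, 8, 5, 7; 5, 9, 11; 4, 3, 4, 7, 7, the first row of G matches the Coke row of Table 2, and the diagonal of G* is all ones.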
The next step in MCA is to define the matrix H^(-1/2), as shown in Figure 1. As noted, H^(-1/2) is a superdiagonal matrix whose main diagonal contains the reciprocals of the square roots of the block diagonal entries of G.
Next, we form the symmetric matrix product

(20) G* = H^(-1/2) G H^(-1/2).

Pre- and postmultiplication of G by H^(-1/2) has the effect of producing a covariance-like matrix without removing means. Because G* has not been "centered" yet, we next find its rank 1 approximation G*(1) via singular value decomposition. The trivial eigenvector associated with this first eigenvalue (when appropriately rescaled by premultiplication by H^(-1/2), as shown shortly) consists of all constants. We subtract G*(1) from G* to obtain the "centered" matrix G**. The matrix G** is the matrix that we want to "factor," in a manner analogous to factoring a correlation or covariance matrix in principal components factor analysis. Finding the SVD of this supermatrix leads to a dimensional interpretation of the 12 points in such a way that all interpoint distances between attribute categories are comparable.
Because G** is symmetric, its SVD is given by

(21) G** = QΛQ',

where Q is an orthonormal matrix of eigenvectors (with Q' denoting its transpose) and Λ is a diagonal matrix of ordered, non-negative eigenvalues. The final step in MCA is to take the matrices Q and Λ of equation 21 and H^(-1/2) of equation 20 and compute the desired coordinates of the solution in t dimensions as

(22) X_t = H^(-1/2) Q_t Λ_t^(1/2);

X_t is simply X**_t = Q_t Λ_t^(1/2), the best t-dimensional approximation to G**, rescaled by H^(-1/2). This is a standard procedure in MCA, as described by Greenacre (1984) or Lebart, Morineau, and Warwick (1984). If one wants to find respondent points, each is computed as the centroid of the attribute levels picked by a given respondent.
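Equations 20 through 22 can be strung together in a few lines. Because G* is symmetric positive semidefinite and its leading eigenvector becomes the constant vector after rescaling by H^(-1/2), subtracting the rank 1 approximation G*(1) and then factoring G** is equivalent to dropping the leading eigenpair, which is what this hedged numpy sketch (ours, not the authors' program) does:

```python
import numpy as np

def mca_coordinates(G, t=2):
    """MCA coordinates from a Burt matrix G: form G* = H^(-1/2) G H^(-1/2)
    (eq. 20), drop the trivial leading eigenpair (the centering step that
    yields G**), and rescale: X_t = H^(-1/2) Q_t Lambda_t^(1/2) (eq. 22)."""
    h = np.diag(G).astype(float)
    hs = 1.0 / np.sqrt(h)
    G_star = G * np.outer(hs, hs)
    evals, evecs = np.linalg.eigh(G_star)          # ascending order
    evals, evecs = evals[::-1], evecs[:, ::-1]     # make them descending
    Q_t = evecs[:, 1:t + 1]                        # skip the trivial eigenpair
    lam = np.maximum(evals[1:t + 1], 0.0)          # guard tiny negative values
    return hs[:, None] * Q_t * np.sqrt(lam)
```

On the Table 1 Burt matrix, the leading eigenvalue of G* equals the number of attributes (3 here), and the resulting coordinates are centered with respect to the attribute-level counts.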
Figure 2 shows the two-dimensional MCA solution obtained from the data of Table 1. The soft drink points are in three fairly distinct clusters, with only Coke and Dr. Pepper appearing in a common cluster. All distances among the points in Figure 2 are comparable in terms of a "chi square metric" defined on the A' matrix. (However, if respondent points are in the same space, based on the centroid of the items picked by each respondent, these interpoint distances would not be comparable to those of the attribute categories.)
The Burt Matrix
As pointed out before, the matrix G is called a "Burt matrix" (Burt 1950). Suppose we have two or more Burt matrices (computed from the same categorical variables) that have
Figure 1
THE SUPERDIAGONAL MATRIX ASSOCIATED WITH THE CROSS-PRODUCTS MATRIX G
[The figure shows the 12 x 12 diagonal matrix H^(-1/2), whose nonzero entries are the reciprocal square roots of the diagonal entries of G.]
arisen from different sources. Let G_s be the Burt matrix for source s (s = 1, 2, ..., S). For example, two (or more) independent samples may have been taken, or we may have sampled the same respondents at two or more points in time. In other cases we may want to single out a particular "way" of the multiple-way contingency table and prepare subtables (and Burt matrices) for each of its categories.
Because Burt matrices are analogous to uncentered covariance matrices, we take each such matrix and compute G*_s and then G**_s, as noted before. Each of the G**_s matrices then is submitted to INDSCAL (Carroll and Chang 1970).^3 It is important, however, that the input data option to INDSCAL be the covariance (rather than the similarities) option. In this case the "covariances" are interpreted as (approximate) scalar products. The net effect of using the INDSCAL covariance input option is that we are performing a symmetric CANDECOMP (CANonical DECOMPosition) of the data.^4

The essence of the approach is that a Burt matrix is
prepared for each separate category of some "distinguished" categorical variable. Each Burt matrix then is converted to G** and submitted to INDSCAL for individual differences analysis; the sources, indexed here by s, play the role of pseudosubjects. A group stimulus space (with a unique orientation) and a set of dimension weights for each contributory Burt matrix are obtained from the analysis.
To complete the analogy to MCA, as based on a single Burt matrix, we also must renormalize by a matrix of the form H^(-1/2). The question now is which H^(-1/2) do we
^3 As descriptions of the INDSCAL model have appeared in many articles and textbooks, we do not provide details here.
^4 CANDECOMP can be thought of as a three-way or higher-way generalization of the SVD; three-way symmetric CANDECOMP is a three-way generalization of the two-way SVD of a symmetric matrix, which in turn is equivalent to eigenvalue/eigenvector decomposition of that symmetric matrix. Ergo, symmetric CANDECOMP is a direct generalization of eigenanalysis-based diagonalization of a two-way symmetric matrix to a three-way array comprised of two or more symmetric matrices. It can be viewed as providing a best least squares approximation to a simultaneous diagonalization of these matrices.
Figure 2
ILLUSTRATIVE MCA CONFIGURATION
[Two-dimensional plot of the 12 attribute-level points: Coke, 7-Up, Dr. Pepper, Nehi; Under $2, $2-$3.99, $4 & over; Pretzels, Peanuts, M&M's, Fritos, Dried fruits.]
use to renormalize the INDSCAL/MCA group stimulus space? We could apply the H^(-1/2) for each source separately to the group space to obtain an MCA-like solution for that source. This procedure, however, would give us a renormalized group space for each source, which loses the spirit of INDSCAL. In INDSCAL a single group space is utilized for all sources, which differ only in the pattern of dimension weights applied to that group space.
Our compromise solution is to define

(23) H̄ = (1/S) Σ_(s=1)^(S) H_s,

where:

(24) H_s = diag(G_s),

and then use H̄^(-1/2) as the renormalizing matrix.
AN EMPIRICAL APPLICATION OF THE BURT MATRIX/INDSCAL PROCEDURE
The suggested approach is illustrated by an industry example. Data were obtained from a national marketing research firm that conducts annual surveys of new car purchasers. Survey respondents typically are asked to specify the make and model purchased, other makes that were considered, and the trade-in make and model, if relevant. Other questions elicit ratings of the new car on a variety of style and performance attributes, as well as psychographic and demographic characteristics.
The data used to illustrate the suggested approach are based on a sample of 460 respondents who purchased 1987-model vehicles. All respondents in the sample were married, but only one spouse was represented in each case.
The sponsor firm was interested in determining how selected demographic characteristics are related to characteristics of the purchased vehicle and, particularly, how these relationships might be influenced by the number of cars in the respondent's household (including the most recently purchased car). Accordingly, the 460 respondents were divided into three subsamples based on car ownership: 1 car, 2 cars, or 3 or more cars in the household. Table 3 lists the seven attributes (demographic and vehicle characteristics) by which each sample was classified.
Multiple Correspondence Analysis

For comparison, multiple correspondence analysis first was run on the total sample of 460 respondents. Figure 3 shows the first two dimensions (oriented in principal axes) that were obtained from a 3-dimensional multiple correspondence analysis. The configuration is noteworthy in that the 31-40 years age category is widely separated from the rest of the points; hence dimension 2 appears to be an age dimension. Dimension 1 shows that
Table 3
LIST OF ATTRIBUTES AND LEVELS USED TO CLASSIFY THE SAMPLE OF RECENT CAR PURCHASERS

Demographic characteristics
  Age (years): 21-30; 31-40; 41-50; 51 and older
  Gender: Male; Female
  Number of children: None; 1-2; 3 or more
  Income/status: Upscale; Downscale
Car characteristics
  Place of manufacture: U.S. made; Import
  Body type: 2-door; 4-door
  Engine type: Gasoline; Diesel
INDSCAL attribute
  Cars per household: 1; 2; 3 or more
"3 or more kids" and "4-door model" are widely separated from "no kids" and "2-door models." Most of the points, however, appear to be bunched rather densely near the origin of the configuration. (A 3-dimensional MCA solution did not improve the interpretation; the 31-40 years category continued to be widely separated from the rest of the points on dimension 3.)

The INDSCAL Analysis
Burt matrices next were prepared for each of the three subsamples (based on cars per household) and transformed to G** matrices. The latter matrices were submitted to INDSCAL (using the covariance input option, as discussed before). Solutions were sought in four through two dimensions. The squared correlations between each solution and the input data (considered as scalar products) were .63, .58, and .48, respectively, for four through two dimensions. Figures 4 and 5 show the INDSCAL group stimulus space for the 1-2 and 1-3 axes, respectively, of the 3-dimensional solution.
The dimensional weights of the three subsamples follow.

                                  Dimension weights
  Group                            1     2     3
  1 car in household              1.01  1.23  1.45
  2 cars in household             1.66  1.25  1.17
  3 or more cars in household     1.30  1.49  1.12
An examination of Figures 4 and 5 suggests that the first dimension can be described as an import versus U.S. car axis; this axis also shows high separation between "no kids, downscale, 21-30 years old, gasoline engine" on the left and "1-2 kids, upscale, 41-50 years old, diesel engine" on the right.
Dimension 2 of Figure 4 appears to be a body-type dimension (2-door vs. 4-door vehicles). Dimension 3 of Figure 5 appears to emphasize family size (3 or more
Figure 3
DIMENSION 2 (VERTICAL) VERSUS 1 (HORIZONTAL) OF THE MCA ANALYSIS (CAR PURCHASE DATA)
[Plot of the attribute-level points listed in Table 3.]
Figure 4
DIMENSION 2 (VERTICAL) VERSUS 1 (HORIZONTAL) OF THE INDSCAL ANALYSIS (CAR PURCHASE DATA)
[Plot of the attribute-level points listed in Table 3.]
Figure 5
DIMENSION 3 (VERTICAL) VERSUS 1 (HORIZONTAL) OF THE INDSCAL ANALYSIS (CAR PURCHASE DATA)
[Plot of the attribute-level points listed in Table 3.]
kids vs. 1-2 kids). We also note on both figures that 4-door models tend to plot near the larger family size (3 or more kids) and 2-door models plot nearer small family size (no kids or 1-2 kids). In contrast to Figure 3 (based on the MCA solution), age does not appear to dominate the configuration of either Figure 4 or Figure 5. Finally, the gender variable (male vs. female) is not very discriminating; these two points plot relatively close to the origin.

From the dimension weight pattern in the text table, the 1-car household has the highest weight for dimension 3, the family size variable (3 or more kids vs. 1-2 kids). The 2-car household has the highest weight for dimension 1 (import vs. U.S.-made). The sample of households with three or more cars shows highest salience for dimension 2 (2-door vs. 4-door).

Given the (unordered) categorical nature of many of the attributes, our use of the term "dimension" is to be taken advisedly. In other problems, the attributes may take the form of ordered categories or possibly discrete points along an assumed underlying continuum. From a managerial standpoint, the sponsor was able to find out that size of car inventory was associated with the relative salience of the INDSCAL-based dimensions. In particular, diesel engine purchasers (one of the attributes of interest to the sponsor) were primarily upscale and older persons who tended to buy imports.

Though in our example the Burt matrix/INDSCAL approach produces a stimulus configuration markedly different from that of MCA, it may not always do so. In some cases the INDSCAL axes line up fairly closely with the principal-axes orientation of MCA. Even there, however, the estimation of differential axis weights may be sufficient to justify the INDSCAL approach (if only as an adjunct to the more conventional MCA).
REFERENCES
Benzécri, J. P. (1969), "Statistical Analysis as a Tool to Make Patterns Emerge from Data," in Methodologies of Pattern Recognition, S. Watanabe, ed. New York: Academic Press, Inc., 35-74.
Burt, Cyril (1950), "The Factorial Analysis of Qualitative Data," British Journal of Psychology (Statistical Section), 3, 166-85.
Carroll, J. Douglas (1968), "Generalization of Canonical Correlation Analysis to Three or More Sets of Variables," in Proceedings of 76th Annual Conference of the American Psychological Association, 3, 227-8.
——— and Jih-Jie Chang (1970), "Analysis of Individual Differences in Multidimensional Scaling via an N-way Generalization of Eckart-Young Decomposition," Psychometrika, 35 (September), 283-319.
———, Paul E. Green, and Catherine M. Schaffer (1986), "Interpoint Distance Comparisons in Correspondence Analysis," Journal of Marketing Research, 23 (August), 271-80.
———, ———, and ——— (1987), "Comparing Interpoint Distances in Correspondence Analysis: A Clarification," Journal of Marketing Research, 24 (November), 445-50.
Gifi, A. (1981), Non-Linear Multivariate Analysis. Leiden, The Netherlands: Department of Data Theory, University of Leiden.
Green, Paul E. with J. Douglas Carroll (1976), Mathematical Tools for Applied Multivariate Analysis. New York: Academic Press, Inc.
———, Vithala R. Rao, and Wayne S. DeSarbo (1978), "Incorporating Group-Level Similarity Judgments in Conjoint Analysis," Journal of Consumer Research, 5 (December), 187-93.
Greenacre, Michael J. (1984), Theory and Application of Correspondence Analysis. London: Academic Press, Inc.
——— and Trevor Hastie (1987), "The Geometric Interpretation of Correspondence Analysis," Journal of the American Statistical Association, 82 (June), 437-47.
Heiser, Willem J. (1981), Unfolding Analysis of Proximity Data. Leiden, The Netherlands: Department of Data Theory, University of Leiden.
Hoffman, Donna L. and George R. Franke (1986), "Correspondence Analysis: Graphical Representation of Categorical Data in Marketing Research," Journal of Marketing Research, 23 (August), 213-27.
Holbrook, Morris, William Moore, and Russell S. Winer (1982), "Constructing Joint Spaces from Pick-Any Data: A New Tool for Consumer Analysis," Journal of Consumer Research, 9 (June), 99-105.
Jambu, M. and M. O. Lebeaux (1983), Cluster Analysis and Data Analysis. Amsterdam: North-Holland Publishing Co.
Lebart, Ludovic and Alain Morineau (1982), "SPAD: A System of FORTRAN Programs for Correspondence Analysis," Journal of Marketing Research, 19 (November), 608-9.
———, ———, and Kenneth M. Warwick (1984), Multivariate Descriptive Statistical Analysis: Correspondence Analysis and Related Techniques for Large Matrices. New York: John Wiley & Sons, Inc.
Meulman, Jacqueline (1982), Homogeneity Analysis of Incomplete Data. Leiden, The Netherlands: DSWO Press.
Nishisato, Shizuhiko (1980), Analysis of Categorical Data: Dual Scaling and Its Applications. Toronto: University of Toronto Press.
——— and Ira Nishisato (1983), An Introduction to Dual Scaling. Islington, Ontario: MicroStats.
Saporta, G. (1980), "About Some Remarkable Properties of Generalized Canonical Analysis," paper presented at Second European Meeting of the Psychometric Society, Groningen, The Netherlands (June).