Return to Big Picture
Transcript of Return to Big Picture
Return to Big Picture

Main statistical goals of OODA:
- Understanding population structure: low-dimensional projections, PCA
- Classification (i.e. discrimination): understanding 2+ populations
- Time series of data objects: chemical spectra, mortality data
- Vertical integration of data types (Qing Feng: JIVE)
Classification - Discrimination

Background, two-class (binary) version:
- Using training data from Class +1 and Class -1
- Develop a rule for assigning new data to a class

Canonical example: disease diagnosis
- New patients are healthy or ill
- Determine which, based on measurements

Fisher Linear Discrimination

A simple way to find the correct covariance adjustment: individually transform the subpopulations so they are spherical about their means.
For a data vector $X$, define the sphered version $\tilde{X} = \hat{\Sigma}_w^{-1/2} X$, where $\hat{\Sigma}_w$ is the pooled within-class covariance estimate.
Fisher Linear Discrimination

[Figure: HDLSSod1egFLD.ps]

So (in the original space) we have a separating hyperplane with:
- Normal vector: $n = \hat{\Sigma}_w^{-1} \left( \bar{X}_{(+1)} - \bar{X}_{(-1)} \right)$
- Intercept: the midpoint $\tfrac{1}{2} \left( \bar{X}_{(+1)} + \bar{X}_{(-1)} \right)$
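In code, the FLD hyperplane can be sketched as follows. This is a minimal illustration with hypothetical toy data: the class means, sample sizes, and function names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class training data (hypothetical means; shared spherical covariance)
X_pos = rng.normal(loc=[2.0, 0.0], size=(50, 2))   # Class +1
X_neg = rng.normal(loc=[-2.0, 0.0], size=(50, 2))  # Class -1

# Class means and pooled within-class covariance estimate
mean_pos, mean_neg = X_pos.mean(axis=0), X_neg.mean(axis=0)
Sigma_w = (np.cov(X_pos.T) + np.cov(X_neg.T)) / 2.0

# FLD separating hyperplane: normal vector Sigma_w^{-1}(mean difference),
# intercept at the midpoint of the class means
normal = np.linalg.solve(Sigma_w, mean_pos - mean_neg)
midpoint = (mean_pos + mean_neg) / 2.0

def fld_classify(x):
    """Assign +1 or -1 according to the side of the hyperplane x falls on."""
    return 1 if normal @ (x - midpoint) > 0 else -1

print(fld_classify(np.array([2.5, 0.3])), fld_classify(np.array([-2.5, 0.1])))
```

Using `np.linalg.solve` rather than forming the inverse explicitly is the standard numerically stable way to apply $\hat{\Sigma}_w^{-1}$.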
Classical Discrimination

[Figure: HDLSSod1egFLD.ps]

The above derivation of FLD was:
- Nonstandard: not in any textbooks(?)
- Nonparametric (don't need Gaussian data), i.e. used no probability distributions
- More machine learning than statistics

Classical Discrimination

FLD likelihood view (cont.): replacing $\mu_{(+1)}$, $\mu_{(-1)}$ and $\Sigma$ by the maximum likelihood estimates $\bar{X}_{(+1)}$, $\bar{X}_{(-1)}$ and $\hat{\Sigma}_w$ gives the likelihood ratio discrimination rule: choose Class +1 when

$$\left( X - \tfrac{1}{2} \left( \bar{X}_{(+1)} + \bar{X}_{(-1)} \right) \right)^{T} \hat{\Sigma}_w^{-1} \left( \bar{X}_{(+1)} - \bar{X}_{(-1)} \right) > 0.$$

This is the same as above, so FLD can be viewed as a likelihood ratio rule.
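The equivalence can be checked numerically: under the common-covariance Gaussian model with plug-in estimates, the likelihood ratio rule and the FLD hyperplane rule give the same sign at every point. A minimal sketch with hypothetical toy data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-class data (hypothetical); pooled covariance as the plug-in estimate
Xp = rng.normal(loc=[1.5, 0.0], size=(40, 2))
Xn = rng.normal(loc=[-1.5, 0.0], size=(40, 2))
mp, mn = Xp.mean(axis=0), Xn.mean(axis=0)
Si = np.linalg.inv((np.cov(Xp.T) + np.cov(Xn.T)) / 2.0)

def fld_rule(x):
    # Hyperplane form: sign of normal . (x - midpoint)
    return np.sign((mp - mn) @ Si @ (x - (mp + mn) / 2.0))

def likelihood_ratio_rule(x):
    # Gaussian log-likelihoods with a common covariance; the normalizing
    # constants cancel, so only the quadratic forms matter
    q = lambda m: -0.5 * (x - m) @ Si @ (x - m)
    return np.sign(q(mp) - q(mn))

# The two rules agree at every random test point
points = rng.normal(scale=3.0, size=(200, 2))
print(all(fld_rule(x) == likelihood_ratio_rule(x) for x in points))
```

Expanding the difference of quadratic forms reproduces the hyperplane inequality exactly, which is why the agreement is not just approximate.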
Classical Discrimination

Summary of FLD vs. GLR:
- Tilted point clouds data: FLD good, GLR good
- Donut data: FLD bad, GLR good
- X data: FLD bad, GLR OK but not great

Classical conclusion: GLR is generally better.
(We will see a different answer for HDLSS data.)

Classical Discrimination

FLD Generalization II (Generalization I was GLR): different prior probabilities.
- Main idea: give different weights to the 2 classes, i.e. assume they are not a priori equally likely
- Development is straightforward: a modified likelihood changes the intercept in FLD
- Won't explore this further here

Classical Discrimination

FLD Generalization III: Principal Discriminant Analysis
- Idea: an FLD-like approach to more than 2 classes
- Assumption: the class covariance matrices are the same (or similar)
  (but not Gaussian; same situation as for FLD)

Classical Discrimination

Principal Discriminant Analysis (cont.): but PCA only works like the mean difference, so expect we can improve by taking covariance into account. Blind application of the above ideas suggests an eigen-analysis of $\hat{\Sigma}_w^{-1} \hat{\Sigma}_B$, the within-class inverse covariance applied to the between-class (class-mean) covariance.
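That eigen-analysis can be sketched as follows. This is a toy illustration: the three class means and sample sizes are hypothetical, and Sigma_B is taken here as the covariance of the class means.

```python
import numpy as np

rng = np.random.default_rng(2)

# Three classes in d = 3 with a shared spherical covariance (hypothetical)
mu = [np.array([3.0, 0.0, 0.0]),
      np.array([-3.0, 1.0, 0.0]),
      np.array([0.0, -3.0, 0.0])]
classes = [m + rng.normal(size=(60, 3)) for m in mu]

# Pooled within-class covariance, and between-class covariance of the means
Sigma_w = sum(np.cov(c.T) for c in classes) / len(classes)
means = np.array([c.mean(axis=0) for c in classes])
Sigma_B = np.cov(means.T)

# Discriminant directions: eigen-analysis of Sigma_w^{-1} Sigma_B, ordered by
# decreasing eigenvalue; at most (#classes - 1) eigenvalues are nonzero
vals, vecs = np.linalg.eig(np.linalg.solve(Sigma_w, Sigma_B))
order = np.argsort(vals.real)[::-1]
print(vals.real[order])
```

With 3 classes, Sigma_B has rank at most 2, so the smallest eigenvalue is numerically zero: the useful discriminant subspace has dimension (#classes - 1).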
Classical Discrimination

Summary of classical ideas:
- Among simple methods: MD and FLD are sometimes similar, and sometimes FLD is better, so FLD is preferred
- Among complicated methods: GLR is best

So always use that? Caution: the story changes for HDLSS settings.

HDLSS Discrimination

Main HDLSS issues:
- Sample size n < dimension d
- Singular covariance matrix, so can't use the matrix inverse
- I.e. can't standardize (sphere) the data, which requires the root inverse covariance
- Can't do classical multivariate analysis

HDLSS Discrimination

Add a 3rd dimension (noise). Project onto the 2-d subspace generated by the optimal direction and the FLD direction.
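The singularity is easy to verify numerically; a minimal sketch, assuming Gaussian noise data with n = 10 and d = 100 (both values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

# HDLSS setting: sample size n far below dimension d
n, d = 10, 100
X = rng.normal(size=(n, d))

# The d x d sample covariance has rank at most n - 1, so it is singular:
# no inverse, hence no root-inverse, hence no sphering of the data
S = np.cov(X.T)
print(S.shape, np.linalg.matrix_rank(S))  # (100, 100) 9
```

The rank deficit (9 = n - 1, since one degree of freedom goes to the mean) is exactly why classical standardization breaks down here.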
HDLSS Discrimination

Movie through increasing dimensions.
HDLSS Discrimination

FLD in increasing dimensions, low dimensions (d = 2-9):
- Visually good separation
- Small angle between the FLD and optimal directions
- Good generalizability

HDLSS Discrimination

FLD in increasing dimensions, medium dimensions (d = 10-26):
- Visual separation too good?!?
- Larger angle between FLD and optimal
- Worse generalizability
- Feel the effect of sampling noise

HDLSS Discrimination

FLD in increasing dimensions, high dimensions (d = 27-37):
- Much worse angle
- Very poor generalizability
- But very small within-class variation, poor separation between classes
- Large separation / variation ratio

HDLSS Discrimination

FLD in increasing dimensions, at the HDLSS boundary (d = 38):
- 38 = degrees of freedom (need to estimate 2 class means)
- Within-class variation = 0?!?
- Data pile up on just two points
- Perfect separation / variation ratio?
- But it only feels microscopic noise aspects, so likely not generalizable
- Angle to optimal is very large

HDLSS Discrimination

FLD in increasing dimensions, just beyond the HDLSS boundary (d = 39-70):
- Improves with higher dimension?!?
- Angle gets better
- Improving generalizability?
- More noise helps classification?!?
HDLSS Discrimination

FLD in increasing dimensions, far beyond the HDLSS boundary (d = 70-1000):
- Quality degrades
- Projections look terrible (populations overlap)
- And generalizability falls apart as well
- The mathematics was worked out by Bickel & Levina (2004)
- The problem is estimation of the d x d covariance matrix
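The dimension effects described in these slides can be imitated in a small simulation: track the angle between the estimated FLD direction and the known optimal direction as d grows with n fixed. This is a hypothetical toy setup (the separation, sample sizes, and seed are invented for illustration), using a pseudo-inverse once past the HDLSS boundary:

```python
import numpy as np

def fld_angle(d, n=20, seed=4):
    """Angle (degrees) between the estimated FLD direction and the optimal
    direction, for spherical Gaussian classes separated along coordinate 1.
    (Toy simulation; all settings are hypothetical.)"""
    rng = np.random.default_rng(seed)
    mu = np.zeros(d)
    mu[0] = 2.2                      # optimal direction is the first axis
    Xp = mu + rng.normal(size=(n, d))
    Xn = -mu + rng.normal(size=(n, d))
    Sigma_w = (np.cov(Xp.T) + np.cov(Xn.T)) / 2.0
    # Pseudo-inverse, since Sigma_w is singular beyond the HDLSS boundary
    w = np.linalg.pinv(Sigma_w) @ (Xp.mean(axis=0) - Xn.mean(axis=0))
    cos = abs(w[0]) / np.linalg.norm(w)
    return np.degrees(np.arccos(min(cos, 1.0)))

for d in (5, 20, 200):
    print(d, round(fld_angle(d), 1))
```

With n fixed, the angle deteriorates badly for large d, consistent with the covariance-estimation problem identified by Bickel & Levina (2004).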
HDLSS Discrimination

A simple solution: the Mean Difference (Centroid) Method.
- Recall it is not classically recommended: usually no better than FLD, sometimes worse
- But it avoids estimation of the covariance
- Means are very stable, so they don't feel the HDLSS problem
HDLSS Discrimination

Mean Difference (Centroid) Method:
- Far more stable over dimensions, because it is the likelihood ratio solution (for known-variance Gaussians)
- Doesn't feel the HDLSS boundary
- Eventually becomes too good?!? Widening gap between clusters?!?
- Careful: the angle to optimal grows, so we lose generalizability (since the noise increases)

HDLSS data present some odd effects.
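A sketch of the centroid rule in an HDLSS setting (the dimension, sample sizes, and separation are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)

# HDLSS toy data: d = 1000, n = 20 per class (hypothetical separation)
d, n = 1000, 20
mu = np.zeros(d)
mu[0] = 3.0
Xp = mu + rng.normal(size=(n, d))
Xn = -mu + rng.normal(size=(n, d))
mean_p, mean_n = Xp.mean(axis=0), Xn.mean(axis=0)

def centroid_rule(x):
    """Mean Difference (Centroid) Method: assign to the nearer class mean.
    Equivalent to FLD with the covariance replaced by the identity, so no
    d x d covariance matrix ever needs to be estimated or inverted."""
    return 1 if np.linalg.norm(x - mean_p) < np.linalg.norm(x - mean_n) else -1

print(centroid_rule(mu), centroid_rule(-mu))
```

Because only the two class means are estimated, the method remains well defined and stable however large d becomes relative to n.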