Dealing With Statistical Uncertainty

Dealing With Statistical Uncertainty Richard Mott Wellcome Trust Centre for Human Genetics


Transcript of Dealing With Statistical Uncertainty

Page 1: Dealing With Statistical Uncertainty

Dealing With Statistical Uncertainty

Richard Mott
Wellcome Trust Centre for Human Genetics

Page 2: Dealing With Statistical Uncertainty

Synopsis

• Hypothesis testing
• P-values
• Confidence intervals
• T-test
• Non-parametric tests
• Sample size calculations
• Permutations
• Bootstrap

Page 3: Dealing With Statistical Uncertainty

Hypothesis testing

Null Hypothesis H0

• Examples of H0:
– Mean of a population is 3.0

– In a genetic association study, there is no association between disease state and the genotypes of a particular SNP

Alternative Hypothesis H1

• Examples of H1:
– Mean of the population > 3.0

– There is an association between disease and genotype

Page 4: Dealing With Statistical Uncertainty

The Likelihood Ratio Test
• General framework for constructing hypothesis tests:

S = Likelihood( data | H1) / Likelihood( data | H0)

Reject H0 if S > sα(H0, H1)

The threshold sα is chosen such that there is a probability α of making a false rejection under H0.

• α is the size of the test (also the false positive rate, or Type I error), eg α = 0.05
• 1−β is the power of the test, the probability of correctly rejecting H0 when H1 is true, eg 1−β = 0.8
• β is the Type II error, the false negative rate

Generally, for fixed sample size n, if we fix α then we cannot also fix β. If we fix both α and β then we must let n vary.

The Neyman–Pearson Lemma states that the likelihood ratio test is the most powerful of all tests of a given size α.

Type III error: "correctly rejecting the null hypothesis for the wrong reason".

Page 5: Dealing With Statistical Uncertainty

Asymptotic Distribution of log-likelihood ratio statistics

• H0 is a submodel of H1

• Under H0 the parameters θ are fixed at θ0

• Under H1 θ is unknown, and is estimated by maximum likelihood

• The maximum likelihood estimate under H1 is θ̂, ie the value at which

∂ log L(θ)/∂θ = 0 at θ = θ̂

Page 6: Dealing With Statistical Uncertainty

Taylor expansion of log-likelihood ratio

Expanding log L(θ0) about the maximum likelihood estimate θ̂ (where the first derivative vanishes):

log L(θ0) = log L(θ̂) + (θ0 − θ̂)′ ∂/∂θ log L(θ̂) + ½ (θ0 − θ̂)′ ∂²/∂θ² log L(θ̂) (θ0 − θ̂) + …

Since ∂ log L(θ̂)/∂θ = 0, the first-order term vanishes, so

log L(θ̂) − log L(θ0) ≈ −½ (θ̂ − θ0)′ ∂²/∂θ² log L(θ̂) (θ̂ − θ0)

a quadratic form with rank k (= difference in number of free parameters)

The asymptotic distribution of 2 × the log-likelihood ratio is χ²k (chi-square with k degrees of freedom)

Page 7: Dealing With Statistical Uncertainty

Example: A Simple Contingency Table

• Objects are classified into K classes
– Probability of class i is pi

• H0: the probability of each class is the same, pi = 1/K

• H1: the class probabilities differ

• Data: Ni objects observed in class i

Page 8: Dealing With Statistical Uncertainty

Contingency Table
Likelihood-ratio test

Likelihood:
L = N! ∏i pi^Ni / Ni!

log-Likelihood:
log(L) = Σi Ni log(pi) + C

log-Likelihood under H0:
log(L | H0) = Σi Ni log(1/K) + C

maximised log-Likelihood under H1 (at p̂i = Ni/N):
max log(L | H1) = Σi Ni log(Ni/N) + C

2 × log-Likelihood ratio:
2(max log(L | H1) − log(L | H0)) = 2 Σi Ni log(NiK/N) = 2 Σi Oi log(Oi/Ei)

where Oi = Ni are the observed counts and Ei = N/K the expected counts under H0.

Compare to the chi-square statistic Σi (Oi − Ei)²/Ei; both are asymptotically distributed as χ²K−1.
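Both statistics on this slide can be checked numerically in R; a minimal sketch, with a hypothetical vector of counts N invented for illustration:

```r
# Hypothetical counts in K = 4 classes (invented for illustration)
N <- c(25, 30, 20, 25)
K <- length(N)
total <- sum(N)
E <- rep(total / K, K)               # expected counts under H0: p_i = 1/K

G     <- 2 * sum(N * log(N / E))     # 2 * log-likelihood ratio (G statistic)
chisq <- sum((N - E)^2 / E)          # Pearson chi-square statistic

# Both are compared to the chi-square distribution with K - 1 df
p.G     <- pchisq(G,     df = K - 1, lower.tail = FALSE)
p.chisq <- pchisq(chisq, df = K - 1, lower.tail = FALSE)
```

The two statistics are asymptotically equivalent but not identical in finite samples, as the slightly different values of G and chisq here show.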

Page 9: Dealing With Statistical Uncertainty

P-values
• The P-value of a statistic Z is the probability of sampling a value more extreme than the observed value z if the null hypothesis is true.

• For a one-sided test, the p-value is P = Prob( Z > z | H0 )

• For a two-sided test it is P = Prob( |Z| > z | H0 )

• Note this is not the probability density at z, but the tail of the cumulative distribution function

The upper and lower 2.5% tails of the N(0,1) distribution correspond to |z| = 1.96
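These tail probabilities are one line of R each; a minimal sketch using the z = 1.96 value from the slide:

```r
z <- 1.96

# One-sided p-value: Prob(Z > z | H0) for Z ~ N(0,1)
p.one <- pnorm(z, lower.tail = FALSE)

# Two-sided p-value: Prob(|Z| > z | H0)
p.two <- 2 * pnorm(abs(z), lower.tail = FALSE)
```

p.two comes out at approximately 0.05, recovering the familiar 5% two-sided threshold.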

Page 10: Dealing With Statistical Uncertainty

Power
• The power of a statistic Z is the probability of sampling a value more extreme than the critical value z if the alternative hypothesis is true.

• For a one-sided test, the power is Prob( Z > z | H1 )

• For a two-sided test it is Prob( |Z| > z | H1 )

• Note this is not the probability density at z, but the tail of the cumulative distribution function

Example: H0: N(0,1), H1: N(1.5,1), H2: N(4,1)
0.05 = Prob( Z > 1.645 | H0 ) (5% upper tail)
0.44 = Prob( Z > 1.645 | H1 )
0.99 = Prob( Z > 1.645 | H2 )
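The three probabilities above can be reproduced with pnorm; a sketch, where qnorm(0.95) ≈ 1.645 is the 5% upper-tail critical value under H0:

```r
crit <- qnorm(0.95)   # 5% upper-tail critical value under H0: N(0,1)

# Power = Prob(Z > crit | alternative), for each alternative distribution
power.H1 <- pnorm(crit, mean = 1.5, sd = 1, lower.tail = FALSE)  # ~0.44
power.H2 <- pnorm(crit, mean = 4.0, sd = 1, lower.tail = FALSE)  # ~0.99
```

The further the alternative mean lies from the null, the larger the tail area beyond the critical value, ie the greater the power.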

Page 11: Dealing With Statistical Uncertainty

Example: The mean of a Normal Distribution

H0: mean = 0 vs H1: mean = 1

data: y1, …, yn iid N(μ, σ²). Assume the variance σ² is the same under both hypotheses and is known.

Therefore we base all inferences on the sample mean, compared to the difference in the hypothesised means.
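A minimal sketch of this known-variance z-test for a Normal mean; the data vector and σ = 2 below are invented for illustration:

```r
# Hypothetical data, invented for illustration: n = 16 observations, known sigma = 2
y <- c(0.8, 1.9, 0.2, 1.4, 2.1, 0.6, 1.1, 1.7,
       0.9, 1.3, 0.4, 1.8, 1.0, 1.5, 0.7, 1.2)
sigma <- 2
n <- length(y)

# Standardised sample mean under H0: mu = 0
z <- (mean(y) - 0) / (sigma / sqrt(n))

# One-sided p-value for H1: mu > 0
p <- pnorm(z, lower.tail = FALSE)
```

Because σ is assumed known, the reference distribution is N(0,1) rather than a T distribution.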

Page 12: Dealing With Statistical Uncertainty

The Normal and T distributions
• For a sample from a Normal distribution with known mean and variance, the sample mean can be standardised to follow the standard Normal distribution: Z = (ȳ − μ)/(σ/√n) ~ N(0,1)

• And the 95% confidence interval for the mean is given by ȳ ± 1.96 σ/√n

• But we often wish to make inferences about samples from a Normal distribution when the variance is unknown: the variance must be estimated from the data as the sample variance s²

• Because of this uncertainty the distribution of the standardised sample mean is broader, and follows the Tn−1 distribution: T = (ȳ − μ)/(s/√n) ~ Tn−1

Page 13: Dealing With Statistical Uncertainty

The T distribution

• The Tn and Normal distributions are almost identical for n > 20

• As a rule of thumb, an approximate 95% confidence interval for the mean when n > 20 is ȳ ± 2s/√n

Page 14: Dealing With Statistical Uncertainty

T-tests
• The T-test compares the means of samples

• The T-test is optimal (in LR sense) if the data are sampled from a Normal distribution

• Even if the data are not Normal the test is useful in large samples

• There are several versions, depending on the details

Page 15: Dealing With Statistical Uncertainty

T tests for the comparison of sample means
One Sample Test

• One sample y1 … yn
– H0: the population mean μ = μ0
– H1: μ ≠ μ0
– Test statistic t = (ȳ − μ0)/(s/√n)
– Reject H0 if |t| > tn−1(α/2)
– tn−1(α) is the quantile of the Tn−1 distribution such that the probability of exceeding tn−1(α) is α
– Two-sided test
– in R, tn−1(0.025) = qt(0.975, n−1) (note qt, the quantile function, not pt)
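The one-sample test can be computed by hand and checked against t.test; the sample y and null mean below are invented for illustration:

```r
# Hypothetical sample, invented for illustration
y   <- c(3.1, 2.4, 4.0, 3.6, 2.9, 3.3, 3.8, 2.7, 3.5, 3.2)
n   <- length(y)
mu0 <- 3.0

# One-sample T statistic, computed by hand
t.manual <- (mean(y) - mu0) / (sd(y) / sqrt(n))

# t_{n-1}(0.025): note qt (quantile function), not pt
crit <- qt(0.975, df = n - 1)

# Two-sided test at alpha = 0.05
reject <- abs(t.manual) > crit

fit <- t.test(y, mu = mu0)   # R's built-in test gives the same statistic
```

Here t.manual ≈ 1.59 is below the critical value ≈ 2.26, so H0 is not rejected.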

Page 16: Dealing With Statistical Uncertainty

T tests for the comparison of sample means Paired T-test

• Two samples of paired data (x1,y1), (x2,y2), …, (xn,yn)

• Example:
– two experimental replicates taken from n individuals; we wish to test if the replicates are statistically similar

• H0: mean(x) = mean(y)
• Take differences d1 = x1 − y1, …, dn = xn − yn
• Equivalently, H0: mean(d) = 0
• One-sample T-test with μ0 = 0
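The equivalence between the paired test and a one-sample test on the differences is easy to verify; the paired vectors below are invented for illustration:

```r
# Hypothetical paired replicates, invented for illustration
x <- c(5.1, 4.8, 6.2, 5.5, 4.9, 5.8, 5.3, 6.0)
y <- c(4.9, 5.0, 5.9, 5.2, 5.1, 5.4, 5.5, 5.7)

paired <- t.test(x, y, paired = TRUE)   # paired T-test
onesmp <- t.test(x - y, mu = 0)         # one-sample T-test on the differences

# The two tests give identical statistics and p-values
```

This is exactly the reduction described above: pairing removes between-individual variation, leaving a one-sample problem on the differences.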

Page 17: Dealing With Statistical Uncertainty

T tests for the comparison of sample means Two-Sample T-test

• Two samples of unpaired data x1, x2, …, xn and y1, y2, …, ym

• H0: mean(x) = mean(y)

• If we assume the variance is the same in each group then we can estimate it with the pooled estimator
s² = [(n − 1)sx² + (m − 1)sy²] / (n + m − 2)

• The test statistic is
t = (x̄ − ȳ) / (s √(1/n + 1/m)), distributed as Tn+m−2 under H0

(The case with unequal variances is more complicated)
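The pooled estimator and test statistic can be checked against t.test with var.equal=TRUE; the two samples below are invented for illustration:

```r
# Hypothetical unpaired samples, invented for illustration
x <- c(2.1, 3.4, 2.8, 3.0, 2.5, 3.2)
y <- c(3.6, 4.1, 3.9, 3.3, 4.4)
n <- length(x); m <- length(y)

# Pooled variance estimator
s2 <- ((n - 1) * var(x) + (m - 1) * var(y)) / (n + m - 2)

# Two-sample T statistic with n + m - 2 degrees of freedom
t.manual <- (mean(x) - mean(y)) / sqrt(s2 * (1/n + 1/m))

fit <- t.test(x, y, var.equal = TRUE)   # agrees with the manual calculation
```

Leaving var.equal at its default FALSE instead gives Welch's test, the "more complicated" unequal-variance case noted above.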

Page 18: Dealing With Statistical Uncertainty

T tests in R

t.test(x, y = NULL, alternative = c("two.sided", "less", "greater"), mu = 0, paired = FALSE, var.equal = FALSE, conf.level = 0.95, ...)

• x: a numeric vector of data values
• y: an optional numeric vector of data values
• alternative: a character string specifying the alternative hypothesis, must be one of "two.sided" (default), "greater" or "less"
• mu: a number indicating the true value of the mean
• paired: a logical indicating whether you want a paired t-test
• var.equal: a logical variable indicating whether to treat the two variances as being equal
• conf.level: confidence level of the interval
• formula: a formula of the form lhs ~ rhs where lhs is a numeric variable giving the data values and rhs a factor with two levels giving the corresponding groups
• data: an optional data frame containing the variables in the model formula
• subset: an optional vector specifying a subset of observations to be used

Page 19: Dealing With Statistical Uncertainty

T tests in R

samp1 <- rnorm( 20, 0, 1) # sample 20 numbers from N(0,1) distribution

samp2 <- rnorm( 20, 1.5, 1) # sample from N(1.5,1)

t.test( samp1, samp2,var.equal=TRUE )

Two Sample t-test

data:  samp1 and samp2
t = -3.5892, df = 38, p-value = 0.000935
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -1.9272037 -0.5372251
sample estimates:
mean of x mean of y
0.4040969 1.6363113

Page 20: Dealing With Statistical Uncertainty

Outliers
A small number of contaminating observations with a different distribution

• Outliers can destroy genuine statistical significance
– Reason: the estimate of the variance is inflated, reducing the T statistic

• And sometimes create false significance: Type III error, "correctly rejecting the null hypothesis for the wrong reason"

• There are robust alternatives to the T-test: the Wilcoxon tests

> samp3 <- samp2
> samp3[1] = 40 # add one outlier

> t.test( samp1, samp3,var.equal=TRUE )

Two Sample t-test

data:  samp1 and samp3
t = -1.5911, df = 38, p-value = 0.1199
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -7.0519594  0.8450837
sample estimates:
mean of x mean of y
0.4040969 3.5075347

> var(samp2)
[1] 1.160251
> var(samp3)
[1] 74.88977

Page 21: Dealing With Statistical Uncertainty

Non-parametric tests: Wilcoxon signed rank test

(alternative to Paired T-test)

• Two samples of paired data (x1,y1), (x2,y2), …, (xn,yn)

• Take differences d1= x1 - y1 … dn= xn – yn

• H0: distribution of d’s is symmetric about 0

• Compute the ranks of the absolute differences, rank(|d1|) etc

• Compute the signs of the differences
• Compute W, the sum of the ranks with positive signs
• If H0 is true then W should be close to its expected value n(n+1)/4

• The distribution of W is known and for n>20 a Normal approximation is used.

• in R:
wilcox.test(x, y, alternative = c("two.sided", "less", "greater"), mu = 0, paired = TRUE, exact = NULL, correct = TRUE, conf.int = FALSE, conf.level = 0.95, ...)
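The W statistic described above can be computed by hand and compared with R's V statistic from wilcox.test; the paired data below are invented (and chosen to avoid ties and zero differences):

```r
# Hypothetical paired data, invented for illustration (no ties, no zero differences)
x <- c(8.2, 7.1, 6.5, 9.0, 7.7, 8.4, 6.9, 7.5)
y <- c(7.4, 7.3, 6.2, 8.1, 8.1, 7.7, 6.4, 7.4)
d <- x - y

# W = sum of the ranks of |d| that have positive sign
W <- sum(rank(abs(d))[d > 0])

fit <- wilcox.test(x, y, paired = TRUE)  # R's V statistic is this same quantity
```

With no ties and small n, wilcox.test uses the exact distribution of W rather than the Normal approximation.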

Page 22: Dealing With Statistical Uncertainty

Wilcoxon rank sum test (alternative to the unpaired T-test)

• Two samples of unpaired data x1, x2, …, xn and y1, y2, …, ym

• Arrange all the observations into a single ranked series of length N = n + m
• R1 = sum of the ranks for the observations which came from sample 1. The rank sum R2 for sample 2 follows by subtraction, since the sum of all the ranks equals N(N + 1)/2

• Compute U1 = R1 − n(n + 1)/2 and U2 = R2 − m(m + 1)/2, and take U = min(U1, U2)

• Normal approximation: z = (U − μ)/σ, where μ = nm/2 and σ² = nm(n + m + 1)/12

• The Wilcoxon Rank sum test is only about 5% less efficient than the unpaired T-test (in terms of the sample size required to achieve the same power) in large samples.
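The rank-sum recipe above can be sketched directly; the two samples below are invented for illustration (R's W statistic equals U1 for the first sample):

```r
# Hypothetical unpaired samples, invented for illustration (no ties)
x <- c(1.3, 2.6, 1.9, 3.2, 2.1)          # n = 5
y <- c(2.8, 3.5, 4.1, 3.0, 3.8, 2.4)     # m = 6
n <- length(x); m <- length(y)

r  <- rank(c(x, y))                      # ranks in the combined series
R1 <- sum(r[1:n])                        # rank sum for sample 1
U1 <- R1 - n * (n + 1) / 2               # Mann-Whitney U for sample 1

# Normal approximation (without continuity correction)
mu    <- n * m / 2
sigma <- sqrt(n * m * (n + m + 1) / 12)
z     <- (U1 - mu) / sigma

fit <- wilcox.test(x, y)                 # R's W statistic equals U1
```

For samples this small, wilcox.test uses the exact distribution by default; the z value is shown only to illustrate the approximation on the slide.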

Page 23: Dealing With Statistical Uncertainty

Wilcoxon rank sum test (alternative to the unpaired T-test)

> wilcox.test(samp1, samp2)

Wilcoxon rank sum test

data:  samp1 and samp2
W = 82, p-value = 0.001041
alternative hypothesis: true mu is not equal to 0

> wilcox.test(samp1, samp3) # with outliers

Wilcoxon rank sum test

data:  samp1 and samp3
W = 82, p-value = 0.001041
alternative hypothesis: true mu is not equal to 0

Page 24: Dealing With Statistical Uncertainty

Binomial Sign Test
(alternative to the one-sample T-test)

• One sample y1 … yn

H0: the population median = μ0

• Count M, the number of y's that are > μ0

• If H0 is true, M should be about n/2, and is distributed as a Binomial random variable B(n, ½)
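The sign test is a one-liner via binom.test; a sketch with an invented sample and μ0 = 0:

```r
# Hypothetical sample, invented for illustration; H0: median = 0
y <- c(0.4, -0.2, 1.1, 0.7, -0.5, 0.9, 0.3, 1.4, 0.6, -0.1)
n <- length(y)

M <- sum(y > 0)                  # number of observations above the null median

# Under H0, M ~ Binomial(n, 1/2); two-sided exact sign test
fit <- binom.test(M, n, p = 0.5)
```

Here M = 7 of 10 observations lie above 0, giving a two-sided p-value of 0.34375, so there is no evidence against H0.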

Page 25: Dealing With Statistical Uncertainty

Permutation Tests
Another non-parametric alternative

• Permutation is a general principle with many applications to hypothesis testing.

• But it is not useful for parameter estimation
• The method is best understood by an example:

– Suppose a data set is classified into N groups. We are interested in whether there are differences between the group means (eg if N=2 then the two-sample T test or Wilcoxon rank sum test is appropriate)

– Compute a statistic S (eg the difference between the maximum and minimum group mean)

– If there are no differences between groups then any permutation of the group labellings between individuals should produce much the same value of S

– Repeat i = 1 to M times:
• permute the group labels
• compute the statistic on the permuted data, Si

– The permutation p-value is the fraction of the Si that exceed the observed S
• It's that simple!
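The recipe above, sketched in R with invented data in two groups (the seed is arbitrary):

```r
set.seed(1)                          # for reproducibility (arbitrary seed)

# Hypothetical data in two groups, invented for illustration
y     <- c(2.1, 2.6, 3.0, 2.4, 2.8, 3.9, 4.2, 3.6, 4.5, 4.0)
group <- rep(c("A", "B"), each = 5)

# Statistic S: difference between the maximum and minimum group mean
stat <- function(g) diff(range(tapply(y, g, mean)))

S <- stat(group)                     # observed statistic
M <- 1000
S.perm <- replicate(M, stat(sample(group)))  # permute the group labels M times

p.perm <- mean(S.perm >= S)          # fraction of permutations reaching S
```

Because the two groups here barely overlap, very few label permutations reproduce a gap as large as the observed S, so the permutation p-value is small.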

Page 26: Dealing With Statistical Uncertainty

Fast Permutation in Linear Models

y = Xb + e

ŷ = X(X′X)⁻¹X′y = My

• M is computed using singular value techniques
• ANOVA F statistics can be calculated from y and ŷ
• Computing M is slow, but need only be done once
• Permutation test: permute y, keep X and M fixed

ŷperm = M yperm
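A sketch of this hat-matrix shortcut with an invented design; it uses a direct matrix inverse for clarity rather than the singular value techniques mentioned on the slide:

```r
# Hypothetical design, invented for illustration: intercept plus one covariate
x <- c(1, 2, 3, 4, 5, 6)
X <- cbind(1, x)
y <- c(2.2, 2.9, 4.1, 4.8, 6.2, 6.8)

# Hat matrix M: computed once, then reused for every permutation
M <- X %*% solve(t(X) %*% X) %*% t(X)

yhat <- M %*% y                      # identical to fitted(lm(y ~ x))

# Permutation step: permute y only; X and M stay fixed
set.seed(1)                          # arbitrary seed
yperm     <- sample(y)
yhat.perm <- M %*% yperm
```

Each permutation then costs only one matrix-vector product, which is what makes permutation testing fast in linear models.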

Page 27: Dealing With Statistical Uncertainty

Multiple Testing
• Conventionally, the significance threshold for a test is α = 0.05, or 0.01
• However, statistical genetics problems frequently involve multiple testing
– eg a genome scan may test over one million SNPs for association; a gene expression microarray will screen tens of thousands of transcripts

• On the one hand, significance thresholds should be adjusted to be more conservative, in order to prevent too many false positives occurring
– In N independent tests, the expected P-value of the most significant result to occur by chance is 1/(N+1) [because P-values are uniformly distributed on [0,1]]
– Whilst it is not true that the tests are independent, this is a surprisingly good guide to what is observed in reality
– Often, at these scales, P-values are reported as −log10(P-value), so that N = 10^6 implies the most significant P-value is about 10^−6, ie logP = 6

• On the other hand, raising the significance threshold lowers the power to detect associations
– so larger sample sizes are needed to compensate
– or a different approach, based around False Discovery Rates (FDR), is employed
– The idea behind FDR is to choose a laxer threshold so that many false positives may occur, but to limit the proportion of false positives to, say, 10%
– The FDR is useful when the purpose of the screen is to identify items (SNPs, genes, etc) for further experimental investigation, which will provide more information
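Both kinds of adjustment are available through R's p.adjust; a sketch with an invented vector of p-values:

```r
# Hypothetical vector of p-values from 8 tests, invented for illustration
p <- c(1e-7, 4e-4, 0.003, 0.02, 0.04, 0.2, 0.5, 0.9)

p.bonf <- p.adjust(p, method = "bonferroni")  # controls the family-wise error rate
p.bh   <- p.adjust(p, method = "BH")          # controls the False Discovery Rate

# With an FDR threshold of 10%, more tests survive than under Bonferroni at 5%
hits.bonf <- sum(p.bonf < 0.05)
hits.fdr  <- sum(p.bh < 0.10)
```

This illustrates the trade-off on the slide: the laxer FDR threshold declares more hits, at the price of tolerating a controlled proportion of false positives among them.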

Page 28: Dealing With Statistical Uncertainty

Bootstrapping
• Bootstrapping is a general way to evaluate the uncertainty in estimates of statistics whilst making few assumptions about the data
• It is particularly useful for complex statistics whose distribution cannot be evaluated analytically
• Example:
– In a gene expression study, we wish to find a minimal set of SNPs that in combination will predict the variation in a phenotype
– Suppose we have a deterministic algorithm A that will find us a plausible set of SNPs (ie a model)
• This is likely to be a complex programme and contain some arbitrary steps
• It is also unlikely to give any measure of uncertainty in the model
– There may be many possible sets of SNPs with similar claims, so we would like a way to average across different models, and evaluate the evidence for a SNP as the fraction of models in which the SNP occurs
• To do this, we need a way to perturb the data, to generate new data sets on which we can apply the procedure A
• Bootstrapping is one way to do this

Page 29: Dealing With Statistical Uncertainty

Bootstrapping

Given a set of n things S = {x1, …, xn}, a bootstrap sample is formed by sampling with replacement from S, to create a new set S* of size n


This means some elements of S will be missing from S* and some will be present multiple times (on average about 1 − 1/e ≈ 63% of the distinct elements of S will be present in S*)

S* can be used as a replacement for S in any algorithm that works on S, and we can repeatedly draw further bootstrap samples.

Any statistic Z that can be calculated on S can also be calculated on S*, and so we can construct the bootstrap sampling distribution of Z from repeated samples.

From this we can make inferences about the variation of Z, eg construct a 95% confidence interval (if Z is numeric)
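A minimal percentile-bootstrap sketch for the mean, with an invented sample (the seed is arbitrary):

```r
set.seed(1)                         # for reproducibility (arbitrary seed)

# Hypothetical sample, invented for illustration
x <- c(4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.8, 4.0, 5.2,
       4.7, 5.9, 3.6, 5.0, 4.5, 6.2, 4.8, 5.4, 4.1, 5.6)
B <- 2000

# Draw B bootstrap samples and compute the statistic (here the mean) on each
boot.means <- replicate(B, mean(sample(x, replace = TRUE)))

# 95% bootstrap confidence interval from the 2.5% and 97.5% percentiles
ci <- quantile(boot.means, c(0.025, 0.975))
```

Any other statistic Z can be substituted for mean in the replicate call; the same percentile construction then gives its bootstrap confidence interval.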