Consensual gene co-expression network inference with multiple samples

Post on 07-Nov-2014


Biostat Seminar, Toulouse Biostatistics Platform, March 19th, 2013


Consensual gene co-expression network inference with multiple samples

Nathalie Villa-Vialaneix(1,2)

http://www.nathalievilla.org

nathalie.villa@univ-paris1.fr

Joint work with Magali SanCristobal and Laurence Liaubet

Biostatistics working group - March 19, 2013


Consensus LASSO (INRA de Toulouse, MIAT) Nathalie Villa-Vialaneix Toulouse, 19 mars 2013 1 / 21

Overview on network inference

Outline

1 Overview on network inference

2 Graphical Gaussian Models

3 Inference with multiple samples

4 Illustration


Overview on network inference

Framework

Data: large-scale gene expression data, collected in an n × p matrix X = (X_i^j), with n ≃ 30-50 individuals in rows and p ≃ 10^3-10^4 variables (gene expressions) in columns.

What we want to obtain: a graph/network with

• nodes: genes;

• edges: “significant” and direct co-expression between two genes (tracking transcription regulations).


Overview on network inference

Modeling multiple interactions between genes with a network

Co-expression networks

• nodes: genes

• edges: “direct” co-expression between two genes

Method:

“Correlations” → Thresholding → Graph
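The “correlations → thresholding → graph” recipe can be sketched as follows (a minimal illustration in Python/numpy on simulated data; the 0.8 cutoff and the simulated genes are arbitrary choices for the example, not a recommendation):

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 40, 6
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.2 * rng.normal(size=n)   # make genes 0 and 1 co-expressed

C = np.corrcoef(X, rowvar=False)               # step 1: "correlations" (p x p)
A = np.abs(C) > 0.8                            # step 2: thresholding
np.fill_diagonal(A, False)                     # step 3: graph (adjacency matrix)
edges = [(j, k) for j in range(p) for k in range(j + 1, p) if A[j, k]]
```

Substituting partial correlations (next slides) for `C` is what restricts the graph to direct links.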




Overview on network inference

Correlations/Partial correlations

(figure: x drives both y and z, so y and z show a strong indirect correlation)

Partial correlation Cor(z, y | x): the correlation between the residuals of the regressions of y and z on x:

set.seed(2807); x <- runif(100)
y <- 2*x + 1 + rnorm(100, 0, 0.1); cor(x, y)
[1] 0.9870407
z <- -x + 2 + rnorm(100, 0, 0.1); cor(x, z)
[1] -0.9443082
cor(y, z)
[1] -0.9336924
cor(lm(y ~ x)$residuals, lm(z ~ x)$residuals)
[1] -0.03071178


Overview on network inference

Advantages of a network approach1 over raw data and correlation network (relevance network,

[Butte and Kohane, 1999]): focuses on direct links;

2 over raw data (again): focuses on “significant” links (more robust)

3 over bibliographic network: can handle interactions with yetunknown (not annotated) genes



Graphical Gaussian Models

Outline

1 Overview on network inference

2 Graphical Gaussian Models

3 Inference with multiple samples

4 Illustration


Graphical Gaussian Models

Theoretical framework

Gaussian Graphical Models (GGM): gene expressions X ∼ N(0, Σ).

Seminal work [Schäfer and Strimmer, 2005], R package GeneNet: estimation of the partial correlations

π_{jj'} = Cor(X^j, X^{j'} | X^k, k ≠ j, j')

from the concentration matrix S = Σ^{-1}:

π_{jj'} = − S_{jj'} / √(S_{jj} S_{j'j'}).

Main issue: p ≫ n ⇒ Σ̂ is badly conditioned ⇒ estimating S from Σ̂^{-1} is a bad idea...

Schäfer & Strimmer's proposal:

1. use Σ̂ + λI rather than Σ̂ to estimate S;

2. select only the most significant S_{jj'} (Bayesian test):

S ∼ (1 − η₀) f_A + η₀ f₀

with f₀ the distribution of the “null” edges and η₀ the proportion of null edges among the partial correlation values (close to 1).
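The shrinkage step and the partial-correlation formula can be illustrated numerically (a Python/numpy sketch on simulated data; it mimics the Σ̂ + λI trick and the π_{jj'} formula, not GeneNet's actual estimator, which chooses λ analytically):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 30, 40, 0.5                 # p > n: Sigma_hat alone is singular
X = rng.normal(size=(n, p))
Sigma_hat = np.cov(X, rowvar=False)

# Shrinkage: Sigma_hat + lam * I is invertible for any lam > 0
S = np.linalg.inv(Sigma_hat + lam * np.eye(p))   # concentration matrix estimate

# pi_{jj'} = -S_{jj'} / sqrt(S_{jj} * S_{j'j'})
d = np.sqrt(np.diag(S))
Pi = -S / np.outer(d, d)
np.fill_diagonal(Pi, 1.0)
```

Since S is positive definite, every off-diagonal entry of Pi lies in [−1, 1], as a partial correlation should.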



Graphical Gaussian Models

Sparse regression approach

[Meinshausen and Bühlmann, 2006, Friedman et al., 2008]: partial correlations can also be estimated using linear models: for all j,

X^j = β_j^T X^{-j} + ε.

In the Gaussian framework, β_{jj'} = − S_{jj'} / S_{jj}.

Independent regressions:

max_{(β_{jj'})_{j'}}  log ML_j − λ ∑_{j'≠j} |β_{jj'}|,

with log ML_j ∼ − ∑_{i=1}^n ( X_i^j − ∑_{j'≠j} β_{jj'} X_i^{j'} )².

Global approach, the graphical LASSO (R package glasso):

max_{(β_{jj'})_{jj'}}  ∑_j log ML_j − λ ∑_{j≠j'} |β_{jj'}|.

Consequence: the sparse penalty forces β_{jj'} = 0 for most coefficients (an “all-in-one” approach: no thresholding step is needed).
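A toy version of the neighborhood-selection idea (one LASSO regression per gene, then the Meinshausen-Bühlmann AND rule for edges) can be sketched as follows. This is a from-scratch coordinate-descent sketch in Python/numpy on simulated data, not the simone or glasso implementation, and λ = 20 is an arbitrary value chosen for this example:

```python
import numpy as np

def lasso_cd(A, b, lam, n_iter=200):
    """Coordinate descent for min_w 0.5*||b - A w||^2 + lam*||w||_1."""
    w = np.zeros(A.shape[1])
    col_sq = (A ** 2).sum(axis=0)
    for _ in range(n_iter):
        for k in range(A.shape[1]):
            r = b - A @ w + A[:, k] * w[k]          # residual excluding feature k
            rho = A[:, k] @ r
            w[k] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[k]
    return w

def neighborhood_selection(X, lam):
    """Regress each gene on all the others; a nonzero beta_{jk} suggests an edge."""
    n, p = X.shape
    B = np.zeros((p, p))
    for j in range(p):
        others = [k for k in range(p) if k != j]
        B[j, others] = lasso_cd(X[:, others], X[:, j], lam)
    # AND rule: keep edge j-k only if both beta_{jk} and beta_{kj} are nonzero
    return (B != 0) & (B.T != 0)

rng = np.random.default_rng(1)
n, p = 50, 10
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=n)     # genes 0 and 1 directly linked
adj = neighborhood_selection(X - X.mean(axis=0), lam=20.0)
```

The planted direct link between genes 0 and 1 survives the penalty, while coefficients between independent genes are shrunk to zero: sparsity replaces the explicit thresholding step.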


Graphical Gaussian Models

Other methods/packages to infer networks

• relevance (correlation) networks: R package WGCNA

• Bayesian networks: R package bnlearn [Pearl, 1998, Pearl and Russel, 2002, Scutari, 2010]

• networks based on mutual information: R package minet [Meyer et al., 2008]

• networks based on random forests [Huynh-Thu et al., 2010]

See also:

• http://cran.r-project.org/web/views/gR.html (CRAN task view on graphical methods)

• https://www.coursera.org/course/pgm (Daphne Koller's on-line course on “Probabilistic Graphical Models”, starts on April, 8th)

• https://www.coursera.org/course/netsysbio (on-line course on “Network Analysis in Systems Biology”)


Inference with multiple samples

Outline

1 Overview on network inference

2 Graphical Gaussian Models

3 Inference with multiple samples

4 Illustration


Inference with multiple samples

Multiple networks inference

Transcriptomic data coming from several different conditions. Examples:

• gene expression from pig muscle in Landrace and Large White breeds;

• gene expression from obese humans before and after a diet.

• Assumption: a common functioning exists regardless of the condition;

• Which genes are correlated independently of / depending on the condition?



Inference with multiple samples

Dataset description

“DeLiSus” dataset

• variables: expression of 81 genes (selected by Laurence)

• conditions: two breeds (33 “Landrace” and 51 “Large white”; 84 pigs)


Inference with multiple samples

“DeLiSus” dataset (restricted dataset with 84 genes (51 pigs))

                          Density  Transitivity  % shared
[1] GeneNet                 0.00       0.71        0.46
[2] simone, MB-AND          0.05       0.08        0.17
[3] simone, Fried.          0.05       0.19        0.22
[4] simone, intertwined     0.05       0.09        0.52
[5] simone, CoopLasso       0.06       0.09        0.88
[6] simone, GroupLasso      0.04       0.07        0.99

Pairwise comparison of the inferred networks:

      [1]   [2]   [3]   [4]   [5]   [6]
[1]  1.00  0.00  0.00  0.00  0.00  0.00
[2]        1.00  0.71  0.76  0.64  0.56
[3]              1.00  0.67  0.55  0.53
[4]                    1.00  0.80  0.67
[5]                          1.00  0.84
[6]                                1.00


Inference with multiple samples

Multiple networks

Independent estimations: if c = 1, …, C are different samples (or “conditions”, e.g., breeds or before/after diet...):

max_{(β^c_{jk})_{k≠j}, c=1,…,C}  ∑_c ( log ML^c_j − λ ∑_{k≠j} |β^c_{jk}| ).

Joint estimations, implemented in the R package simone [Chiquet et al., 2011]:

GroupLasso: consensual network between conditions (enforces identical edges by a group LASSO penalty);

CoopLasso: sign-coherent network between conditions (prevents edges that correspond to partial correlations having different signs; thus allows one to obtain a few differences between the conditions);

Intertwined: in the graphical LASSO, replace Σ̂^c by ½ Σ̂^c + ½ Σ̄, where Σ̄ = (1/C) ∑_c Σ̂^c.
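The intertwined estimator amounts to a simple shrinkage of each per-condition empirical covariance toward the across-condition mean; a minimal numpy sketch (toy data, two conditions, sizes chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(7)
# Empirical covariances Sigma_hat^c for C = 2 toy conditions (30 samples, 5 genes)
covs = [np.cov(rng.normal(size=(30, 5)), rowvar=False) for _ in range(2)]

mean_cov = sum(covs) / len(covs)                        # Sigma_bar = (1/C) sum_c Sigma_hat^c
intertwined = [0.5 * S + 0.5 * mean_cov for S in covs]  # 1/2 Sigma_hat^c + 1/2 Sigma_bar
```

Each intertwined Σ̂^c then replaces the raw Σ̂^c in a separate graphical LASSO run, which pulls the C inferred networks toward each other.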


Inference with multiple samples

Consensus LASSO

Proposal: infer multiple networks by forcing them toward a consensual network.

Original optimization:

max_{(β^c_{jk})_{k≠j}, c=1,…,C}  ∑_c ( log ML^c_j − λ ∑_{k≠j} |β^c_{jk}| ).

Add a constraint to force inference toward a consensus β^cons:

max_{(β^c_{jk})_{k≠j}, c=1,…,C}  ∑_c ( log ML^c_j − λ ∑_{k≠j} |β^c_{jk}| ) − µ ∑_c w_c ‖β^c_j − β^cons_j‖²

Examples:

• β^cons_j = β^{c*}_j with c* = arg min_c |β^c_j| (network intersection);

• β^cons_j = ∑_c (n_c / n) β^c_j (“average” network).
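For concreteness, the “average” consensus and the associated penalty term can be computed as follows (a numpy sketch with hypothetical coefficient vectors for one gene j; the values are made up, and taking w_c = n_c/n is an assumption for the example):

```python
import numpy as np

# Hypothetical per-condition coefficients beta^c_j for one gene j (C = 2 breeds)
betas = {"Landrace": np.array([0.8, 0.0, -0.3]),
         "LargeWhite": np.array([0.6, 0.1, -0.5])}
n_c = {"Landrace": 33, "LargeWhite": 51}
n = sum(n_c.values())

# "Average" consensus: beta^cons_j = sum_c (n_c / n) * beta^c_j
beta_cons = sum(n_c[c] / n * betas[c] for c in betas)

# Consensus penalty mu * sum_c w_c * ||beta^c_j - beta^cons_j||^2 (here w_c = n_c / n)
mu = 1.0
penalty = mu * sum(n_c[c] / n * np.sum((betas[c] - beta_cons) ** 2) for c in betas)
```

The penalty is zero exactly when every condition agrees with the consensus, so µ tunes how strongly the per-condition networks are pulled together.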



Inference with multiple samples

In practice...

β^cons_j = ∑_c (n_c / n) β^c_j is a good choice because:

• ∂β^cons_j / ∂β^c_j exists;

• thus, solving the optimization problem is equivalent to maximizing

(1/2) β_j^T S_j(µ) β_j + β_j^T Σ̂_{j,\j} + λ ∑_c (1/n_c) ‖β^c_j‖₁

with Σ̂_{j,\j} the jth row of the empirical covariance matrix deprived of its jth entry, and S_j(µ) = Σ̂_{\j,\j} + 2µ AᵀA, where Σ̂_{\j,\j} is the empirical covariance matrix deprived of its jth row and column and A is a matrix that does not depend on j.

This is a standard LASSO problem that can be solved using a sub-gradient method (as described in [Chiquet et al., 2011]) and is already implemented in the beta R package therese.



Illustration

Outline

1 Overview on network inference

2 Graphical Gaussian Models

3 Inference with multiple samples

4 Illustration


Illustration

Datasets description

“DeLiSus” dataset

• variables: expression of 26 genes (selected by Laurence)

• conditions: two breeds (33 “Landrace” and 51 “Large white”; 84 pigs)

Methodology

• package GeneNet: networks are estimated independently by a GGM approach (edges selected based on the p-value of a Bayesian test);

• consensus LASSO: µ fixed and λ varied along a regularization path; one point of the path is selected so that the number of edges is similar to the GeneNet solution.
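Selecting the point of the regularization path whose network has about the same number of edges as a reference (e.g., the GeneNet network) can be sketched as follows (hypothetical helper names and a tiny fake two-point path, for illustration only):

```python
import numpy as np

def count_edges(B, tol=1e-10):
    """Number of edges of the network encoded by coefficient matrix B
    (an edge j-k exists when B[j, k] or B[k, j] is nonzero)."""
    A = (np.abs(B) > tol) | (np.abs(B.T) > tol)
    np.fill_diagonal(A, False)
    return int(A.sum() // 2)

def select_lambda(path, target_edges):
    """path: list of (lambda, B) pairs; return the lambda whose network
    has the edge count closest to target_edges."""
    return min(path, key=lambda lb: abs(count_edges(lb[1]) - target_edges))[0]

# Tiny fake path: a denser network for the smaller lambda
B_dense = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0.0]])
B_sparse = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0.0]])
lam = select_lambda([(0.1, B_dense), (0.9, B_sparse)], target_edges=1)
print(lam)   # -> 0.9: the sparser model matches the 1-edge target
```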



Illustration

Results (figures): networks inferred by the package GeneNet vs. the consensus LASSO.


Illustration

Results (figures): networks inferred by the package simone (intertwined) vs. the consensus LASSO.


Illustration

Results (figure): network inferred by the consensus LASSO.


Illustration

Conclusion

... much left to do:

• biological validation,

• selecting λ (AIC and BIC are way too restrictive...),

• tuning µ,

• other comparisons...


Illustration

References

Butte, A. and Kohane, I. (1999). Unsupervised knowledge discovery in medical databases using relevance networks. In Proceedings of the AMIA Symposium, pages 711-715.

Chiquet, J., Grandvalet, Y., and Ambroise, C. (2011). Inferring multiple graphical structures. Statistics and Computing, 21(4):537-553.

Friedman, J., Hastie, T., and Tibshirani, R. (2008). Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432-441.

Huynh-Thu, V., Irrthum, A., Wehenkel, L., and Geurts, P. (2010). Inferring regulatory networks from expression data using tree-based methods. PLoS ONE, 5(9):e12776.

Meinshausen, N. and Bühlmann, P. (2006). High dimensional graphs and variable selection with the lasso. Annals of Statistics, 34(3):1436-1462.

Meyer, P., Lafitte, F., and Bontempi, G. (2008). minet: an R/Bioconductor package for inferring large transcriptional networks using mutual information. BMC Bioinformatics, 9(461).

Pearl, J. (1998). Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan Kaufmann, San Francisco, California, USA.

Pearl, J. and Russel, S. (2002). Bayesian Networks. Bradford Books (MIT Press), Cambridge, Massachusetts, USA.

Schäfer, J. and Strimmer, K. (2005). An empirical Bayes approach to inferring large-scale gene association networks. Bioinformatics, 21(6):754-764.

Scutari, M. (2010). Learning Bayesian networks with the bnlearn R package. Journal of Statistical Software, 35(3):1-22.
