Linear-Space Alignment


Transcript of Linear-Space Alignment

Page 1: Linear-Space Alignment

Linear-Space Alignment

Page 2: Linear-Space Alignment

Linear-space alignment

• Using 2 columns of space, we can compute, for k = 0…N, F(M/2, k) and F^r(M/2, N – k), PLUS the backpointers (see the sketch below)

[Diagram: two DP matrices side by side; the forward matrix aligns x1…xM/2 against y1…yN, the reverse matrix aligns xM/2+1…xM against y1…yN.]
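The two-column idea can be made concrete with a short sketch. Below is a minimal Python version, assuming a substitution score function s(a, b) and a linear gap penalty d per symbol; the names prefix_scores and middle_column_scores are illustrative, not from the slides.

    def prefix_scores(x, y, s, d):
        """Return F(len(x), k) for k = 0..len(y), keeping only two columns.

        F(i, k) = best score of globally aligning x[:i] with y[:k],
        with substitution score s(a, b) and linear gap penalty d per symbol.
        """
        N = len(y)
        prev = [-d * k for k in range(N + 1)]                    # column i = 0
        for i in range(1, len(x) + 1):
            curr = [0] * (N + 1)
            curr[0] = -d * i
            for k in range(1, N + 1):
                curr[k] = max(prev[k - 1] + s(x[i - 1], y[k - 1]),   # substitution
                              prev[k] - d,                           # x[i-1] vs gap
                              curr[k - 1] - d)                       # y[k-1] vs gap
            prev = curr
        return prev

    def middle_column_scores(x, y, s, d):
        """F(M/2, k) for k = 0..N, plus the reverse scores F^r(M/2, j)."""
        h = len(x) // 2
        fwd = prefix_scores(x[:h], y, s, d)                # fwd[k] = F(M/2, k)
        rev = prefix_scores(x[h:][::-1], y[::-1], s, d)    # rev[j] = F^r(M/2, j)
        return fwd, rev

With, say, s = lambda a, b: 2 if a == b else -1 and d = 1, the quantity fwd[k] + rev[len(y) - k] is exactly what the next slide maximizes to find k*.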

Page 3: Linear-Space Alignment

Linear-space alignment

• Now, we can find k* maximizing F(M/2, k) + F^r(M/2, N – k)
• Also, we can trace the path exiting column M/2 from k*

[Diagram: the DP matrix with column indices 0, 1, …, M/2, M/2+1, …, M; the optimal path crosses from row k* at column M/2 to row k*+1 at column M/2+1.]

Page 4: Linear-Space Alignment

Linear-space alignment

• Iterate this procedure to the left and right!

[Diagram: the problem splits into a top-left box of size M/2 × k* and a bottom-right box of size M/2 × (N – k*), on which the same procedure is repeated.]

Page 5: Linear-Space Alignment

Linear-space alignment

Hirschberg’s Linear-space algorithm:

MEMALIGN(l, l’, r, r’): (aligns xl…xl’ with yr…yr’)

1. Let h = (l’ – l)/2
2. Find (in Time O((l’ – l)(r’ – r)), Space O(r’ – r)) the optimal path, Lh, entering column h – 1 and exiting column h
   Let k1 = position at column h – 2 where Lh enters
       k2 = position at column h + 1 where Lh exits
3. MEMALIGN(l, h – 2, r, k1)
4. Output Lh
5. MEMALIGN(h + 1, l’, k2, r’)

Top level call: MEMALIGN(1, M, 1, N)
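For reference, here is a compact Python sketch of the same divide-and-conquer idea. It is a simplification, not the slide's exact bookkeeping: it assumes the prefix_scores helper from the Page 2 sketch is in scope, splits exactly at the middle column and recurses on (x[:h], y[:k*]) and (x[h:], y[k*:]) rather than tracking the h – 2 / h + 1 columns, and returns the two aligned strings instead of printing Lh.

    def hirschberg(x, y, s, d):
        """Global alignment of x and y in O(len(y)) working space."""
        if len(x) == 0:
            return "-" * len(y), y
        if len(y) == 0:
            return x, "-" * len(x)
        if len(x) == 1:
            # Base case: put the single x character opposite its best partner
            # in y, unless even the best substitution is worse than two gaps.
            j = max(range(len(y)), key=lambda j: s(x[0], y[j]))
            if s(x[0], y[j]) >= -2 * d:
                return "-" * j + x + "-" * (len(y) - j - 1), y
            return x + "-" * len(y), "-" + y
        h = len(x) // 2
        fwd = prefix_scores(x[:h], y, s, d)                # F(h, k)
        rev = prefix_scores(x[h:][::-1], y[::-1], s, d)    # F^r(h, N - k)
        N = len(y)
        k_star = max(range(N + 1), key=lambda k: fwd[k] + rev[N - k])
        x_left, y_left = hirschberg(x[:h], y[:k_star], s, d)
        x_right, y_right = hirschberg(x[h:], y[k_star:], s, d)
        return x_left + x_right, y_left + y_right

The top-level call hirschberg(x, y, s, d) plays the role of MEMALIGN(1, M, 1, N).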

Page 6: Linear-Space Alignment

Linear-space alignment

Time, Space analysis of Hirschberg’s algorithm:

To compute the optimal path at the middle column, for a box of size M × N:
  Space: 2N
  Time: cMN, for some constant c

Then, the left and right recursive calls cost c( (M/2) k* + (M/2)(N – k*) ) = cMN/2

All recursive calls together give Total Time: cMN + cMN/2 + cMN/4 + … = 2cMN = O(MN)

Total Space: O(N) for computation, O(N + M) to store the optimal alignment
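Written as a recurrence (the symbol T(M, N) for total time is my notation, not the slide's), the same argument reads:

    T(M, N) \le cMN + T(M/2,\, k^*) + T(M/2,\, N - k^*)
            \le cMN \Bigl(1 + \tfrac{1}{2} + \tfrac{1}{4} + \cdots\Bigr)
            = 2cMN = O(MN)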

Page 7: Linear-Space Alignment

Heuristic Local Aligners

1. The basic indexing & extension technique

2. Indexing: techniques to improve sensitivity (Pairs of Words, Patterns)

3. Systems for local alignment

Page 8: Linear-Space Alignment

Indexing-based local alignment

Dictionary:
  All words of length k (~10)
  Alignment initiated between words of alignment score ≥ T (typically T = k)

Alignment:
  Ungapped extensions until score drops below statistical threshold

Output:
  All local alignments with score > statistical threshold

[Diagram: the query is scanned against the DB word index; shared words between query and DB seed the alignments.]
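A minimal seed-and-extend sketch in Python, under simplifying assumptions: seeds are exact shared k-words (i.e. T = k, with a +1/-1 match/mismatch score), and the statistical thresholds are replaced by fixed illustrative cutoffs (x_drop, min_score). The function names are hypothetical.

    from collections import defaultdict

    def build_index(db, k):
        """Dictionary: every length-k word of db -> list of start positions."""
        index = defaultdict(list)
        for i in range(len(db) - k + 1):
            index[db[i:i + k]].append(i)
        return index

    def ungapped_extend(query, db, qpos, dpos, k, match=1, mismatch=-1, x_drop=5):
        """Extend an exact k-word hit left and right without gaps, stopping
        once the running score falls x_drop below the best score seen."""
        def extend(step):
            best = score = best_len = length = 0
            i, j = (qpos + k, dpos + k) if step == 1 else (qpos - 1, dpos - 1)
            while 0 <= i < len(query) and 0 <= j < len(db):
                score += match if query[i] == db[j] else mismatch
                length += 1
                if score > best:
                    best, best_len = score, length
                elif score < best - x_drop:
                    break
                i += step
                j += step
            return best, best_len

        right_score, right_len = extend(+1)
        left_score, left_len = extend(-1)
        score = k * match + left_score + right_score   # the seed itself is k matches
        return score, qpos - left_len, dpos - left_len, k + left_len + right_len

    def seed_and_extend(query, db, k=10, min_score=15):
        """Report (score, query_start, db_start, length) for ungapped local
        alignments seeded by exact shared k-words (illustrative thresholds)."""
        index = build_index(db, k)
        hits = []
        for q in range(len(query) - k + 1):
            for d in index.get(query[q:q + k], []):
                hit = ungapped_extend(query, db, q, d, k)
                if hit[0] >= min_score:
                    hits.append(hit)
        return hits

Each reported hit is the kind of ungapped candidate that a real aligner would then pass to the gapped-extension stage shown on the next slide.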

Page 9: Linear-Space Alignment

Indexing-based local alignment—Extensions

[Diagram: DP matrix for extending a word hit between two sequences (ACGAAGTAAGGTCCAGT along the top), with the gapped extension shown as a path.]

Gapped extensions until threshold

• Extensions with gaps, until the score falls C below the best score so far

Output:

GTAAGGTCCAGT
GTTAGGTC-AGT

Page 10: Linear-Space Alignment

Sensitivity-Speed Tradeoff

[Plot: sensitivity vs. speed; long words (k = 15) favor speed, short words (k = 7) favor sensitivity.]

Kent WJ, Genome Research 2002

Page 11: Linear-Space Alignment

Sensitivity-Speed Tradeoff

Methods to improve sensitivity/speed

1. Using pairs of words

2. Using inexact words

3. Patterns: non-consecutive positions

……ATAACGGACGACTGATTACACTGATTCTTAC……

……GGCACGGACCAGTGACTACTCTGATTCCCAG……

……ATAACGGACGACTGATTACACTGATTCTTAC……

……GGCGCCGACGAGTGATTACACAGATTGCCAG……

[Diagram: a non-consecutive pattern of match positions laid over the sequence TTTGATTACACAGAT.]
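To make the "non-consecutive positions" idea concrete, here is a small Python sketch that scans with a spaced pattern. The pattern string shown is PatternHunter-style and purely illustrative, and the function name is hypothetical.

    from collections import defaultdict

    def spaced_seed_hits(query, db, pattern="111010010100110111"):
        """Positions (q, d) where query and db agree at every '1' of the pattern;
        '0' positions are "don't care"."""
        care = [i for i, c in enumerate(pattern) if c == "1"]
        span = len(pattern)
        index = defaultdict(list)            # projected db word -> positions
        for d in range(len(db) - span + 1):
            index["".join(db[d + i] for i in care)].append(d)
        return [(q, d)
                for q in range(len(query) - span + 1)
                for d in index["".join(query[q + i] for i in care)]]

Each (q, d) hit would then be extended exactly as with consecutive words.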

Page 12: Linear-Space Alignment

Measured improvement

Kent WJ, Genome Research 2002

Page 13: Linear-Space Alignment

Non-consecutive words—Patterns

Patterns increase the likelihood of at least one match within a long conserved region

[Diagram: word matches inside a conserved region, comparing consecutive positions with non-consecutive positions; examples with 3, 5, 6, and 7 common positions.]

On a 100-long 70% conserved region:
                            Consecutive    Non-consecutive
  Expected # hits:              1.07            0.97
  Prob[at least one hit]:       0.30            0.47
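The exact figures depend on the seeds and region model used in the source; as a sanity check, a Monte Carlo sketch like the one below estimates the same two quantities for any pattern, under assumed parameters (weight-11 seeds, 100-long region, 70% per-position identity).

    import random

    def hit_stats(pattern, region_len=100, identity=0.7, trials=20000, seed=0):
        """Monte Carlo estimate of E[# hits] and P(>= 1 hit) for a match pattern
        scanned across a region whose positions are conserved with prob identity.

        pattern: string of '1' (position must match) and '0' (ignored)."""
        rng = random.Random(seed)
        care = [i for i, c in enumerate(pattern) if c == "1"]
        span = len(pattern)
        total_hits = at_least_one = 0
        for _ in range(trials):
            conserved = [rng.random() < identity for _ in range(region_len)]
            hits = sum(all(conserved[p + i] for i in care)
                       for p in range(region_len - span + 1))
            total_hits += hits
            at_least_one += hits > 0
        return total_hits / trials, at_least_one / trials

    # Consecutive weight-11 word vs. a PatternHunter-style spaced seed;
    # the slide's exact parameters behind 1.07/0.97 and 0.30/0.47 may differ.
    print(hit_stats("1" * 11))
    print(hit_stats("111010010100110111"))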

Page 14: Linear-Space Alignment

Advantage of Patterns

[Diagram: an 11-position consecutive word, an 11-position spaced pattern, and the 10 positions shared by neighboring placements.]

Page 15: Linear-Space Alignment

Multiple patterns

• K patterns
  Takes K times longer to scan
  Patterns can complement one another

• Computational problem:
  Given: a model (probability distribution) for homology between two regions
  Find: the best set of K patterns that maximizes Prob(at least one match)

[Diagram: several complementary patterns laid over the sequence TTTGATTACACAGAT.]

Buhler et al., RECOMB 2003; Sun & Buhler, RECOMB 2004

How long does it take to search the query?

Page 16: Linear-Space Alignment

Variants of BLAST

• NCBI BLAST: search the universe
  http://www.ncbi.nlm.nih.gov/BLAST/
• MEGABLAST: http://genopole.toulouse.inra.fr/blast/megablast.html
  Optimized to align very similar sequences
  Works best when k = 4i ≥ 16
  Linear gap penalty
• WU-BLAST (Wash U BLAST): http://blast.wustl.edu/
  Very good optimizations
  Good set of features & command line arguments
• BLAT http://genome.ucsc.edu/cgi-bin/hgBlat
  Faster, less sensitive than BLAST
  Good for aligning huge numbers of queries
• CHAOS http://www.cs.berkeley.edu/~brudno/chaos
  Uses inexact k-mers, sensitive
• PatternHunter http://www.bioinformaticssolutions.com/products/ph/index.php
  Uses patterns instead of k-mers
• BlastZ http://www.psc.edu/general/software/packages/blastz/
  Uses patterns, good for finding genes
• Typhon http://typhon.stanford.edu
  Uses multiple alignments to improve sensitivity/speed tradeoff

Page 17: Linear-Space Alignment

Hidden Markov Models

[Diagram: HMM trellis with states 1, 2, …, K at each position and emitted symbols x1, x2, x3, …, xK.]

Page 18: Linear-Space Alignment

Outline for our next topic

• Hidden Markov models – the theory

• Probabilistic interpretation of alignments using HMMs

Later in the course:

• Applications of HMMs to biological sequence modeling and discovery of features such as genes

Page 19: Linear-Space Alignment

Example: The Dishonest Casino

A casino has two dice:
• Fair die: P(1) = P(2) = P(3) = P(4) = P(5) = P(6) = 1/6
• Loaded die: P(1) = P(2) = P(3) = P(4) = P(5) = 1/10, P(6) = 1/2

Casino player switches back-&-forth between fair and loaded die once every 20 turns

Game:
1. You bet $1
2. You roll (always with a fair die)
3. Casino player rolls (maybe with fair die, maybe with loaded die)
4. Highest number wins $2
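A small simulation of the casino's hidden process, assuming a switch probability of 0.05 ("once every 20 turns" on average) and that the casino starts with the fair die; both choices are assumptions made for illustration.

    import random

    def roll_sequence(n, p_switch=0.05, seed=0):
        """Simulate n casino rolls; returns (rolls, dice), where dice[t] is
        'F' or 'L' for the die actually used at turn t."""
        rng = random.Random(seed)
        fair = [1/6] * 6
        loaded = [1/10] * 5 + [1/2]
        die, rolls, dice = "F", [], []
        for _ in range(n):
            weights = fair if die == "F" else loaded
            rolls.append(rng.choices(range(1, 7), weights=weights)[0])
            dice.append(die)
            if rng.random() < p_switch:              # switch fair <-> loaded
                die = "L" if die == "F" else "F"
        return rolls, dice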

Page 20: Linear-Space Alignment

Question # 1 – Evaluation

GIVEN

A sequence of rolls by the casino player

1245526462146146136136661664661636616366163616515615115146123562344

QUESTION

How likely is this sequence, given our model of how the casino works?

This is the EVALUATION problem in HMMs

Prob = 1.3 × 10^-35
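This probability is what the forward algorithm computes. Below is a sketch for the two-state casino model; the uniform start distribution is an assumption, so the exact value may differ from the figure quoted above.

    def forward_probability(rolls, p_switch=0.05, start=(0.5, 0.5)):
        """P(roll sequence) under the dishonest-casino HMM (EVALUATION).
        States: 0 = fair, 1 = loaded.  Plain probabilities are fine at this
        length; much longer sequences would need log-space or scaling."""
        emit = [[1/6] * 6,                 # fair die
                [1/10] * 5 + [1/2]]        # loaded die
        trans = [[1 - p_switch, p_switch],
                 [p_switch, 1 - p_switch]]
        f = [start[k] * emit[k][rolls[0] - 1] for k in (0, 1)]
        for x in rolls[1:]:
            f = [emit[k][x - 1] * sum(f[j] * trans[j][k] for j in (0, 1))
                 for k in (0, 1)]
        return sum(f)

    rolls = [int(c) for c in
             "1245526462146146136136661664661636616366163616515615115146123562344"]
    print(forward_probability(rolls))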

Page 21: Linear-Space Alignment

Question # 2 – Decoding

GIVEN

A sequence of rolls by the casino player

1245526462146146136136661664661636616366163616515615115146123562344

QUESTION

What portion of the sequence was generated with the fair die, and what portion with the loaded die?

This is the DECODING question in HMMs

FAIR LOADED FAIR
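Decoding is done with the Viterbi algorithm. A log-space sketch for the same two-state model (again with an assumed uniform start distribution); it returns one 'F'/'L' label per roll, i.e. the kind of FAIR/LOADED segmentation shown above.

    import math

    def viterbi_decode(rolls, p_switch=0.05, start=(0.5, 0.5)):
        """Most probable die sequence given the rolls (DECODING), in log space."""
        emit = [[1/6] * 6, [1/10] * 5 + [1/2]]       # 0 = fair, 1 = loaded
        trans = [[1 - p_switch, p_switch], [p_switch, 1 - p_switch]]
        log = math.log
        v = [log(start[k]) + log(emit[k][rolls[0] - 1]) for k in (0, 1)]
        back = []
        for x in rolls[1:]:
            ptr = [max((0, 1), key=lambda j: v[j] + log(trans[j][k]))
                   for k in (0, 1)]
            v = [v[ptr[k]] + log(trans[ptr[k]][k]) + log(emit[k][x - 1])
                 for k in (0, 1)]
            back.append(ptr)
        state = max((0, 1), key=lambda k: v[k])
        path = [state]
        for ptr in reversed(back):                   # trace back-pointers
            state = ptr[state]
            path.append(state)
        return "".join("FL"[k] for k in reversed(path))

    # e.g. print(viterbi_decode(rolls)) with rolls as in the forward example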

Page 22: Linear-Space Alignment

Question # 3 – Learning

GIVEN

A sequence of rolls by the casino player

1245526462146146136136661664661636616366163616515615115146123562344

QUESTION

How “loaded” is the loaded die? How “fair” is the fair die? How often does the casino player change from fair to loaded, and back?

This is the LEARNING question in HMMs

Prob(6) = 64%

Page 23: Linear-Space Alignment

The dishonest casino model

FAIR ⇄ LOADED   (switch probability 0.05 in each direction; stay probability 0.95)

P(1|F) = 1/6   P(2|F) = 1/6   P(3|F) = 1/6   P(4|F) = 1/6   P(5|F) = 1/6   P(6|F) = 1/6
P(1|L) = 1/10  P(2|L) = 1/10  P(3|L) = 1/10  P(4|L) = 1/10  P(5|L) = 1/10  P(6|L) = 1/2


Page 25: Linear-Space Alignment

A HMM is memory-less

At each time step t, the only thing that affects future states is the current state πt


Page 26: Linear-Space Alignment

Definition of a hidden Markov model

Definition: A hidden Markov model (HMM)
• Alphabet Σ = { b1, b2, …, bM }
• Set of states Q = { 1, ..., K }
• Transition probabilities between any two states
  aij = transition prob from state i to state j
  ai1 + … + aiK = 1, for all states i = 1…K
• Start probabilities a0i
  a01 + … + a0K = 1
• Emission probabilities within each state
  ek(b) = P( xi = b | πi = k )
  ek(b1) + … + ek(bM) = 1, for all states k = 1…K

(End probabilities ai0 appear in Durbin; not needed here.)
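The definition translates almost directly into a small container with the normalization constraints checked. This is a sketch (the field names a0, a, e are mine, not the slide's), with the dishonest casino as an instance; the uniform start distribution is an assumption.

    from dataclasses import dataclass

    @dataclass
    class HMM:
        """Transcription of the definition: alphabet {b1..bM}, states Q = {1..K},
        start probs a0[i], transitions a[i][j], emissions e[k][b]."""
        alphabet: list            # [b1, ..., bM]
        a0: list                  # start probabilities, length K, sums to 1
        a: list                   # K x K transition matrix, each row sums to 1
        e: list                   # K x M emission matrix, each row sums to 1

        def __post_init__(self):
            K = len(self.a0)
            assert abs(sum(self.a0) - 1) < 1e-9
            for row in self.a:
                assert len(row) == K and abs(sum(row) - 1) < 1e-9
            for row in self.e:
                assert len(row) == len(self.alphabet) and abs(sum(row) - 1) < 1e-9

    # The dishonest casino as an instance (uniform start chosen for illustration):
    casino = HMM(alphabet=[1, 2, 3, 4, 5, 6],
                 a0=[0.5, 0.5],
                 a=[[0.95, 0.05], [0.05, 0.95]],
                 e=[[1/6] * 6, [1/10] * 5 + [1/2]])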

Page 27: Linear-Space Alignment

A HMM is memory-less

At each time step t, the only thing that affects future states is the current state πt

P(πt+1 = k | “whatever happened so far”) =
P(πt+1 = k | π1, π2, …, πt, x1, x2, …, xt) =
P(πt+1 = k | πt)


Page 28: Linear-Space Alignment

A HMM is memory-less

At each time step t, the only thing that affects xt is the current state πt

P(xt = b | “whatever happened so far”) =
P(xt = b | π1, π2, …, πt, x1, x2, …, xt-1) =
P(xt = b | πt)


Page 29: Linear-Space Alignment

A parse of a sequence

Given a sequence x = x1……xN, a parse of x is a sequence of states π = π1, ……, πN

[Diagram: the parse shown as a path through the state trellis (states 1, 2, …, K at each position), one state per emitted symbol x1, x2, x3, …, xK.]

Page 30: Linear-Space Alignment

Generating a sequence by the model

Given a HMM, we can generate a sequence of length n as follows:

1. Start at state π1 according to prob a0π1
2. Emit letter x1 according to prob eπ1(x1)
3. Go to state π2 according to prob aπ1π2
4. … until emitting xn

[Diagram: trellis of states 1, 2, …, K over positions 1…n; example arrows labeled a02 (start in state 2) and e2(x1) (emit x1 from state 2) illustrate steps 1 and 2.]
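A sketch of this generation recipe, reusing the HMM container from the definition slide (it assumes that class and the casino instance are in scope); the state and symbol choices are sampled in exactly the order the steps above describe.

    import random

    def generate(hmm, n, seed=0):
        """Generate (symbols, states) of length n: draw pi_1 from a0, emit x_1
        from e[pi_1], move to pi_2 using row pi_1 of a, and so on until x_n."""
        rng = random.Random(seed)
        K = len(hmm.a0)
        states, symbols = [], []
        state = rng.choices(range(K), weights=hmm.a0)[0]
        for _ in range(n):
            states.append(state)
            symbols.append(rng.choices(hmm.alphabet, weights=hmm.e[state])[0])
            state = rng.choices(range(K), weights=hmm.a[state])[0]
        return symbols, states

    # e.g. generate(casino, 20) -> 20 rolls plus the dice used (0 = fair, 1 = loaded)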