
DFG Schwerpunktprogramm 1114
Mathematical Methods for Time Series Analysis and Digital Image Processing

    Order patterns in time series

Christoph Bandt and Faten Shiha

Preprint Series DFG-SPP 1114

Preprint 106, April 2005


The consecutive numbering of the publications is determined by their chronological order.

The aim of this preprint series is to make new research rapidly available for scientific discussion. Therefore, the responsibility for the contents rests solely with the authors. The publications will be distributed by the authors.


    Order Patterns in Time Series

    Christoph Bandt and Faten Shiha

    March 2005

    Abstract

In an attempt to find fast and robust methods for data mining, we study time series by counting order patterns of 2, 3 or 4 equally spaced values. The statistics for stationary Gaussian processes is given. The pattern distribution for some well-known empirical time series deviates strongly from any Gaussian model. Applications include descriptive methods, similar to autocorrelation, as well as tests for symmetry properties like reversibility.

    1 Introduction

In applied statistics, the simplest methods are often the best. Moreover, in recent years the routine study of huge multivariate data sets, data mining, requires algorithms which work extremely fast. Here we present some very elementary methods for the analysis of a univariate time series $(x_t)_{t=1,\dots,T}$ which can be generalized to the multivariate setting. We do not use the actual values $x_t$. We only need to know whether $x_s < x_t$ or $x_s > x_t$. We shall assume that the underlying variables $X_t$ have a continuous distribution so that ties $x_s = x_t$ are very rare.

In other words, we study time series on ordinal level. This means we forget a lot of structure. All standard methods of time series analysis, based on vector spaces and additive decomposition of processes, are not available in our setting. Probably this is the reason that, with exception of a number of papers by Marc Hallin and co-authors on rank correlation [11, 12, 15], a few case studies like those of Brillinger [7, 8], and our recent work [4, 3], ordinal methods have rarely been used for time series although they are very common in elementary statistics. On the other hand, methods based on the topological structure of time series instead of the vector space structure are very robust under noise and non-linear perturbation. We also need not assume much stationarity of the underlying process, in particular if we compare only values $x_s, x_t$ for small $|s-t|$. So let us start now with the simplest case.


    2 Counting up and down

    First examples

We consider the time series $(x_1,\dots,x_T)$ and a delay parameter $d \in \{1, 2, 3, \dots\}$. Now we determine the relative frequency of time points $t$ for which $x_t < x_{t+d}$:

$$p(d) = \frac{\operatorname{card}\{t \mid 1 \le t \le T-d,\ x_t < x_{t+d}\}}{T-d}$$

At first glance, one might expect that this value is always $\frac12$. However, for the well-known monthly sunspot series and annual Canadian lynx series [20] the function $p(d)$ looks curious (Figure 1). It oscillates between 0.42 and 0.64, clearly indicating the 11-years (132 months) periodicity of sunspots and the 10-years cycle of lynx.

Note that the sunspots have 3046 values and the lynx only 114. Let us consider each series as a sample from a stationary process and each $p(d)$ as an estimate, approximated by a value from a binomial distribution with $n = T-d$, $p \approx \frac12$. Then the standard deviation of a single $p(d)$ becomes 0.01 for the sunspots and 0.05 for the lynx. This agrees with the fluctuations of consecutive values in Figure 1. Thus for the short lynx series, single values $p(d)$ do not differ too significantly from $\frac12$, but the whole structure is meaningful.
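The definition above is immediate to compute. A minimal sketch in Python (the function name `p` is ours, not from the paper):

```python
def p(x, d):
    """Relative frequency of time points t with x[t] < x[t+d],
    following p(d) = card{t | x_t < x_{t+d}} / (T - d)."""
    T = len(x)
    if not 1 <= d < T:
        raise ValueError("need 1 <= d < T")
    up = sum(1 for t in range(T - d) if x[t] < x[t + d])
    return up / (T - d)
```

For a strictly increasing series, $p(d) = 1$ for every $d$; for the alternating series $0, 1, 0, 1$ one gets $p(1) = 2/3$.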

[Figure: three panels showing $p(d)$, with values between 0.4 and 0.6]

Figure 1. The function $p(d)$ for the monthly sunspot series 1749-2003 and the Canadian lynx series [20]

If $x(t)$ is a function of continuous time, $0 \le t \le T$, then let $d$ assume real values between 0 and $T$, and let us define $p(d)$ with the Lebesgue length measure $\lambda$:

$$p(d) = \frac{\lambda(\{t \mid 0 \le t \le T-d,\ x(t) < x(t+d)\})}{T-d}$$

Example 2.1. Consider a sawtooth function with period one: $x(t) = \frac{t}{a}$ for $0 \le t \le a$, and $x(t) = \frac{1}{1-a}(1-t)$ for $a \le t \le 1$. Extend $x(t)$ periodically, $x(t+m) = x(t)$ for any integer $m$. Then

$$p(d) = d + a(1-2d) \quad\text{for } 0 < d < 1.$$

[Figure: the sawtooth function, with the set where $x(t) > x(t+d)$ marked, and the graph of $p(d)$]

Figure 2. Sawtooth function and its $p(d)$ for $a = \frac23$

See Figure 2. Instead of considering large $T$ and the limit $T \to \infty$, we take $T = 1$ and all $t \in [0,1]$, calculating $t+d$ modulo 1 for $t > T-d$. Then

$$x(t) > x(t+d) \quad\text{if and only if}\quad a(1-d) < t < 1 - d(1-a),$$

which proves the formula for $p(d)$.

Of course, $p(d)$ also has period one, $p(d+1) = p(d)$, and $p(0) = 0$. It might be better to say that $p$ is not defined for integers $d = m$ since $x(t+m) = x(t)$ for all $t$. Actually, it is not possible to define $p(0)$ so that $p$ becomes continuous at 0, because $p(0+) = \lim_{d\to 0+} p(d) = \lambda(\{t \mid x'(t) > 0\})/T$ and $p(0-) = \lambda(\{t \mid x'(t) < 0\})/T = 1 - p(0+)$. This argument holds for all piecewise differentiable functions $x(t)$. Thus for continuous time $p(d)$ is discontinuous at 0 and will be considered only for $d > 0$. If $x$ is periodic with period $s$, we take $0 < d < s$.

Let us say that a function, a time series or an underlying stochastic process is balanced if $p(d) = \frac12$ for all $d$. The sawtooth function is balanced only for the symmetric case $a = \frac12$.
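Example 2.1 can be checked numerically. The sketch below samples the sawtooth on a grid of $[0,1)$ (the grid size is an arbitrary choice of ours) and compares the empirical frequency with the closed form $p(d) = d + a(1-2d)$:

```python
def sawtooth(t, a):
    """Sawtooth with period one: rises on [0, a], falls on [a, 1]."""
    t = t % 1.0
    return t / a if t <= a else (1.0 - t) / (1.0 - a)

def p_periodic(a, d, n=100000):
    """Empirical p(d) on a grid of [0, 1), with t + d taken modulo 1."""
    hits = sum(1 for k in range(n)
               if sawtooth(k / n, a) < sawtooth(k / n + d, a))
    return hits / n
```

For $a = \frac23$ and $d = 0.25$ the formula gives $0.25 + \frac23 \cdot 0.5 \approx 0.583$, and for the symmetric case $a = \frac12$ the empirical value stays near $\frac12$, in line with the balance statement.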

    Gaussian processes versus chaotic time series

We want to give an argument which shows that series from Gaussian processes are balanced while series derived from dynamical systems are usually not. For a stationary process $(X_t)_{t \in \mathbb{T}}$ the value $p(d)$ is determined by the two-dimensional distribution of $(X_t, X_{t+d})$, where $t$ does not matter.

Proposition 2.2. Stationary Gaussian processes are balanced. A process with stationary increments is balanced if the median of increments over any time span $d$ is 0 and $P(X_{t+d} = X_t) = 0$.
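Proposition 2.2 can be illustrated by simulation. The sketch below uses a Gaussian AR(1) model as an example (the model and its parameters are our choice for illustration) and checks that $p(d)$ stays near $\frac12$:

```python
import numpy as np

def p(x, d):
    """Fraction of time points t with x[t] < x[t+d]."""
    return float(np.mean(x[:-d] < x[d:]))

def gaussian_ar1(a, T, seed=0):
    """Path of the stationary Gaussian AR(1) process X_t = a X_{t-1} + eps_t."""
    rng = np.random.default_rng(seed)
    x = np.empty(T)
    x[0] = rng.standard_normal() / np.sqrt(1 - a ** 2)  # stationary start
    for t in range(1, T):
        x[t] = a * x[t - 1] + rng.standard_normal()
    return x

x = gaussian_ar1(0.9, 20000)
# p(x, d) should be close to 1/2 for every delay d.
```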


... far from $\frac12$, but it seems unlikely that it is exactly $\frac12$. Why should a chaotic attractor adapt to the two triangles?

    Some other examples

Example 2.4. In Figure 4 we study $p(d)$ by simulation of AR processes with non-Gaussian noise. We take $X_t = 0.9X_{t-1} + \varepsilon_t$ and $X_t = 1.8X_{t-1} - 0.9X_{t-2} + \varepsilon_t$, where the white noise $\varepsilon_t$ is exponentially distributed. In the AR(1) process the exponential distribution causes $p(d)$ to be essentially below $\frac12$ up to $d = 20$ (to get the smooth appearance, we took the average of 500 functions $p(d)$ with $T = 5000$). For the AR(2) process the deviations from $\frac12$ are smaller, and the periodicity of the process is indicated.

[Figure: panels showing $p(d)$ for $0 \le d \le 30$]

Figure 4. The function $p(d)$ for AR processes with exponential noise. Left: $X_t = 0.9X_{t-1} + \varepsilon_t$. Right: $X_t = 1.8X_{t-1} - 0.9X_{t-2} + \varepsilon_t$.
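The AR(1) case of Example 2.4 can be reproduced in a few lines. A sketch with centered Exponential(1) innovations (a single long run rather than the paper's average over 500 runs, so the curve is noisier):

```python
import numpy as np

def p(x, d):
    return float(np.mean(x[:-d] < x[d:]))

def ar1_exponential(a, T, seed=1):
    """AR(1) path X_t = a X_{t-1} + eps_t with centered exponential noise."""
    rng = np.random.default_rng(seed)
    eps = rng.exponential(1.0, T) - 1.0   # mean zero, skewed to the right
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = a * x[t - 1] + eps[t]
    return x[1000:]                        # discard burn-in

x = ar1_exponential(0.9, 51000)
# The skewed noise pushes p(d) below 1/2 for small delays d.
```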

Let us come back to periodic functions, like the sawtooth example above. Can we find an example with sinusoid $p(d)$, as for the sunspots? The sine function itself is balanced since $\sin(t+d) - \sin t = 2\sin\frac{d}{2}\cos(t+\frac{d}{2})$. But there are very simple analytic periodic functions which are not balanced.

Example 2.5. Let $x(t) = \sin t + \frac12 \sin 2t$ (Figure 5). The graph of $p(d)$ is S-shaped, point symmetric with respect to $(\pi, \frac12)$, and it must have a discontinuity at $d = 2\pi$. However, this discontinuity disappears if we add to $x(t)$ a little Gaussian noise. Then $p(d)$ looks almost like a sine, as for the sunspots. If stronger noise is added, $p(d)$ gets a flat part, as $x(t)$ itself, and becomes asymmetric with respect to $\frac12$, with a range from 0.42 to 0.54.


[Figure: four panels (a)-(d)]

Figure 5. (a) The function $x(t) = \sin t + \frac12 \sin 2t$. (b) $p(d)$ for this function. (c), (d) $p(d)$ for the disturbed function $x(t) + c\,\varepsilon_t$ where $\varepsilon_t$ is standard Gaussian white noise, $c = 0.07$ and $c = 0.7$.

    Tests

Many tests have been suggested to check various properties of the underlying process of a given time series, as for example Gaussian distribution, linearity, and reversibility [20]. A stationary process is called reversible if $(X_{t_1}, X_{t_2}, \dots, X_{t_k})$ has the same distribution as $(X_{t-t_1}, X_{t-t_2}, \dots, X_{t-t_k})$, for arbitrary $t, t_1, \dots, t_k$. Obviously, such a process must be balanced. Since $p(d) = \frac12$ is necessary for being Gaussian as well as reversible, $p(d)$ can be used as a test statistic for checking these properties.

Test for balance. Under the null hypothesis $p(d) = \frac12$ for the underlying process, and rather weak independence requirements, the value $(T-d)\,p(d)$ for the time series has a binomial distribution with $n = T-d$ and $p = \frac12$. Its significance for each $d$ can easily be checked. Nevertheless, we rather recommend drawing the function $p(d)$ as a visual tool for deciding Gaussian distribution and reversibility. It gives only one particular parameter of the two-dimensional distributions, but it gives this parameter for all delays $d$ within one figure.
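The test for balance amounts to a binomial test. A sketch using the normal approximation (the helper name `balance_z` is ours):

```python
import math

def balance_z(x, d):
    """z-score of (T-d) p(d) against Binomial(n = T-d, p = 1/2).
    |z| > 1.96 rejects balance at the 5% level (normal approximation)."""
    n = len(x) - d
    k = sum(1 for t in range(n) if x[t] < x[t + d])
    return (k - n / 2) / math.sqrt(n / 4)
```

A monotone series is rejected immediately, while an alternating series is not.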

Test for trend. When $p(d)$ increases with $d$, we can expect that our time series has an increasing trend. This can be seen for the sunspots in Figure 1. Thus to check for trend, we can do a linear regression with $p(d)$ instead of $x_t$, or we can compare the $p(d)$, $d = 1, \dots, \frac{T}{4}$ with $p(d)$, $d = \frac{T}{4}+1, \dots, \frac{T}{2}$, for instance by a Mann-Whitney test.

[Figure: $p(d)$ for $0 \le d \le 400$, with curves labelled "linear trend" and "random walk"]

Figure 6. $p(d)$ for two simple trend models, $b = 0.02$

Example 2.6. To see how $p(d)$ describes trends, consider two simple models (Figure 6). The linear trend model $X_t = a + bt + \varepsilon_t$ implies

$$p(d) = P(\varepsilon_t - \varepsilon_{t+d} < bd).$$

For uniform noise on $[-1, 1]$, the density $\varphi$ of $\varepsilon_t - \varepsilon_{t+d}$ is symmetric to zero, and $\varphi(z) = (2-z)/4$ for $0 \le z \le 2$. Thus $p(d) = \frac12 + \frac{bd}{8}(4 - bd)$ for $0 \le bd \le 2$, and $p(d) = 1$ for $bd > 2$. Thus $p(d)$ is almost linear between $(0, \frac12)$ and $(\frac{2}{b}, 1)$, and it is not much different for Gaussian noise. The random walk with drift $X_t = X_{t-1} + b + \varepsilon_t$ gives

$$p(d) = P\Big(\sum_{k=1}^{d} \varepsilon_{t+k} + bd > 0\Big) = \Phi(b\sqrt{d})$$

when $\varepsilon_t$ is standard Gaussian noise. The increase of $p(d)$ is much slower.
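Both formulas of Example 2.6 can be verified by simulation. A sketch (sample size is our choice; $b = 0.02$ matches Figure 6):

```python
import math
import numpy as np

def p(x, d):
    return float(np.mean(x[:-d] < x[d:]))

def Phi(z):
    """Standard normal distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

rng = np.random.default_rng(2)
T, b, d = 200000, 0.02, 50
trend = b * np.arange(T) + rng.uniform(-1, 1, T)   # linear trend model
walk = np.cumsum(b + rng.standard_normal(T))       # random walk with drift

p_trend = 0.5 + b * d / 8 * (4 - b * d)   # = 0.875 for bd = 1
p_walk = Phi(b * math.sqrt(d))
```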

[Figure: $p(d)$ for $0 \le d \le 400$, three curves labelled 1, 2, 3]

Figure 7. $p(d)$ for three parts of the sunspot series. 1: 1749-1832, 2: 1833-1916, 3: 1917-2000

Checking stationarity. Calculating $p(d)$ for different parts of a sufficiently long series, we can look for changes in the behavior of the time series. For Figure 7 the sunspot series was divided into three equal parts. While the periodicity remains stable, it can be seen that the trend which was visible in the whole series (Figure 1) is only due to the increased values within the 20th century (part 3). An improved method was used by A. Groth [13] to study changes in an EEG, caused for instance by epileptic seizure. There are other properties, like self-similarity or long-term dependence of the underlying process, which can also be checked with $p(d)$. Of course, $p(d)$ only describes one tiny detail necessary for these properties, but this detail is easy to extract and evaluate even from short time series.

    3 Order patterns of length 3

    Definition

Now let us consider three equidistant time points. The corresponding values $x_t$, $x_{t+d}$ and $x_{t+2d}$ can form the six order patterns shown in Figure 8. The most intuitive way to denote these patterns is by the sequence of rank numbers (1 denotes the minimum and 3 the maximum of the values). For instance, the relation $x_{t+d} < x_{t+2d} < x_t$ describes the order pattern $\pi = 312$ since $x_t$ is the largest, $x_{t+d}$ the smallest, and $x_{t+2d}$ the second of the three values. This notation applies to any order pattern of $n$ values $x_t, x_{t+d}, \dots, x_{t+(n-1)d}$.

[Figure: the six patterns 123, 132, 213, 312, 231, 321 as small zigzag graphs]

Figure 8. The six order patterns of length 3.

We define the relative frequency $p_\pi(d)$ of an order pattern $\pi$ in $(x_t)_{t=1,\dots,T}$ as

$$p_\pi(d) = \frac{\operatorname{card}\{t \mid 1 \le t \le T-(n-1)d,\ (x_t, x_{t+d}, \dots, x_{t+(n-1)d}) \text{ forms order pattern } \pi\}}{T-(n-1)d}.$$

Our $p(d)$ is the special case $n = 2$ and $\pi = 12$. As in the previous section, we can define $p_\pi(d)$ also for functions $x(t)$ in continuous time, $t \in [0, T]$, replacing the cardinality of the finite set by the length measure $\lambda$ of time points. For a stationary process, we can work with the distribution of $(X_t, X_{t+d}, \dots, X_{t+(n-1)d})$ for a fixed $t$. Actually, it is enough to know the signs of increments. In this section, we consider the case $n = 3$. Thus for a process with stationary increments, we need only the two-dimensional distribution of $(X_{t+d} - X_t, X_{t+2d} - X_{t+d})$ for a fixed $t$.
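Counting order patterns by their rank tuples is straightforward. A sketch (pattern names are strings of rank numbers, as in the text):

```python
from collections import Counter

def pattern_freqs(x, d, n=3):
    """Relative frequency of each order pattern of length n with delay d.
    A pattern such as '312' lists the rank (1 = smallest) of
    x_t, x_{t+d}, ..., x_{t+(n-1)d} in time order."""
    counts = Counter()
    N = len(x) - (n - 1) * d
    for t in range(N):
        window = [x[t + j * d] for j in range(n)]
        order = sorted(range(n), key=lambda j: window[j])
        ranks = [0] * n
        for r, j in enumerate(order, start=1):
            ranks[j] = r
        counts["".join(map(str, ranks))] += 1
    return {pat: c / N for pat, c in counts.items()}
```

For example, the three values $5, 1, 3$ form the pattern 312 from the text.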


    Basic properties

There are some obvious relations between the frequencies of order patterns. Let us write $p_\pi$ for $p_\pi(d)$ if no confusion is possible.

Proposition 3.1. For a process with stationary increments, and all $d$,

a) $p_{12} = p_{123} + p_{132} + p_{231} = p_{123} + p_{213} + p_{312}$

b) $p_{132} + p_{231} = p_{213} + p_{312}$

c) $p_{123} - p_{321} = 2p_{12} - 1$

d) $p_{12}(2d) = p_{123}(d) + p_{132}(d) + p_{213}(d)$

e) $p_{12}(d) - p_{12}(2d) = p_{231}(d) - p_{213}(d) = p_{312}(d) - p_{132}(d)$

For an arbitrary time series with $T$ values, d) holds exactly, and the other equations hold approximately: the difference between both sides of the equation is at most $\frac{d}{T-d}$ in a) and e), $\frac{d}{T-2d}$ in b), and $\frac{2d}{T-d}$ in c).

Proof. We start with processes. For a) we write $p_{12}$ as

$$P(X_t < X_{t+d}) = P(X_t < X_{t+d} < X_{t+2d}) + P(X_t < X_{t+2d} < X_{t+d}) + P(X_{t+2d} < X_t < X_{t+d}).$$

Then we work similarly with $p_{12} = P(X_{t+d} < X_{t+2d})$, and for d) with $p_{12}(2d) = P(X_t < X_{t+2d})$. b) immediately follows from a). For c) we use a) and the complementary relation $p_{21} = p_{321} + p_{213} + p_{312}$ to conclude $p_{12} - p_{123} = p_{21} - p_{321}$. e) is a consequence of a) and d).

For a time series, the same argument works for d). For a) we note that $p_{12}$ is obtained from all $t \le T-d$ while the order patterns of length 3 are calculated from $t \le T-2d$. Thus

$$p_{12}\,(T-d) = (p_{123} + p_{213} + p_{312})(T-2d) + y \quad\text{with } 0 \le y \le d.$$

The difference between both sides in a) is $(y - d(p_{123} + p_{213} + p_{312}))/(T-d)$; its absolute value is at most $d/(T-d)$. This also holds for e), as difference of a) and d). For c) we take the difference of two such equations; the error is at most $2d/(T-d)$. For b) we can use another argument: the numbers of local maxima (given by patterns 132 and 231) and of local minima (given by patterns 213 and 312) within any sequence $x_t, x_{t+d}, \dots, x_{t+kd}$ can differ only by one. In our case we take $k$ so that $T-d < t+kd \le T$, and we sum over all initial values $t$ with $0 < t \le d$ to obtain the bound $d/(T-2d)$. All arguments also apply to functions $x(t)$ on $[0, T]$.


Our error bounds are good for small $d$, but useless for $d = T/4$, for instance. Numerical simulations show, however, that the coincidence in the above relations is much better than the proven bounds (see Figure 10).
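Such a numerical check is easy to set up. The sketch below verifies relations a) and d) of Proposition 3.1 on a random series (the seed and length are arbitrary choices of ours):

```python
import random

def p12(x, d):
    n = len(x) - d
    return sum(x[t] < x[t + d] for t in range(n)) / n

def p3(x, d, pat):
    """Frequency of the length-3 order pattern pat, e.g. (1, 2, 3)."""
    n = len(x) - 2 * d
    cnt = 0
    for t in range(n):
        w = [x[t], x[t + d], x[t + 2 * d]]
        ranks = tuple(sorted(w).index(v) + 1 for v in w)
        cnt += (ranks == pat)
    return cnt / n

random.seed(3)
x = [random.random() for _ in range(500)]
T, d = 500, 7
lhs_a = p12(x, d)
rhs_a = p3(x, d, (1, 2, 3)) + p3(x, d, (1, 3, 2)) + p3(x, d, (2, 3, 1))
# relation a) holds up to d/(T-d); relation d) holds exactly.
```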

[Figure: the six frequencies $p_\pi(d)$ for the patterns 123, 132, 231, 213, 312, 321]

Figure 9. The frequencies of patterns of length 3 for the sunspot series

    Examples

The frequencies of the six order patterns in Figure 8 for the sunspot series are shown in Figure 9. It can be seen that the patterns 123 and 321 appear more often than the others, but not for all $d$. Moreover, the patterns 132 and 213 as well as 231 and 312 do appear with almost the same frequency, for all $d$. This could be accidental for some $d$ but not for all of them! It indicates a kind of symmetry of the underlying process.

Since 132 and 213 are obtained from each other by rotation around 180° (see Figure 8), we say that the process is rotation symmetric for patterns of length 3 if $p_{132} = p_{213}$ for all $d$. Equation b) above then shows that also $p_{231} = p_{312}$. For the third pair of rotated patterns, 123 and 321, equality would only be possible for those $d$ for which $p_{12}(d) = \frac12$, by equation c).

Proposition 3.1 shows that given $p_{12}$ there are only two independent numbers among the six frequencies $p_\pi$ for length 3. So the question arises which of the $p_\pi$, or combinations thereof, are the most instructive ones. We suggest

$$u = p_{123} + p_{321} \quad\text{as an index of persistence,}$$

$$v = p_{132} - p_{213} = p_{312} - p_{231} \quad\text{as an index of rotation symmetry.}$$

Additionally, we may consider $w(d) = p_{12}(d) - p_{12}(2d)$, which indicates the difference of the shapes of maxima and minima, see equation e). The index $u$ is good for detecting periodicity even in a very noisy time series. $u(d)$ will be minimal at $d = \frac{s}{2}, \frac{3s}{2}, \frac{5s}{2}, \dots$ where $s$ denotes the period. This is easy to explain: two successive steps of length $\frac{s}{2}$ in the same direction (both $<$ or both $>$) are never possible in the strictly periodic case where $x(t) = x(t+s)$ for all $t$. On the other hand, $u(d)$ will be large for $d$ near to $s, 2s, 3s, \dots$, at least when the periodic function is sufficiently smooth: then if $x(t)$ is on an increasing branch of the function, this is also true for $x(t+s)$ and $x(t+2s)$, by periodicity. For $d = s - \varepsilon$ with small $\varepsilon$ this implies $x(t) > x(t+d) > x(t+2d)$. For $d = s + \varepsilon$ we get $x(t) < x(t+d) < x(t+2d)$. But for $d = s$ we would get equality, which a little perturbation will take to one or the other side. Thus $u$ has not a maximum at $s$, but two maxima left and right of $s$, and between them a minimum at $s$. This effect can be seen very well in Figure 10.
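To illustrate, a sketch of $u$ and $v$ on a strictly periodic series (a sampled sine with period $s = 12$, phase-shifted to avoid ties; the example is our choice): $u(s/2)$ must vanish, since two successive steps of length $s/2$ in the same direction are impossible.

```python
import math

def freq3(x, d, pat):
    """Frequency of the length-3 order pattern pat, e.g. (1, 2, 3)."""
    n = len(x) - 2 * d
    cnt = 0
    for t in range(n):
        w = [x[t], x[t + d], x[t + 2 * d]]
        r = tuple(sorted(w).index(v) + 1 for v in w)
        cnt += (r == pat)
    return cnt / n

def u(x, d):
    """Index of persistence."""
    return freq3(x, d, (1, 2, 3)) + freq3(x, d, (3, 2, 1))

def v(x, d):
    """Index of rotation symmetry."""
    return freq3(x, d, (1, 3, 2)) - freq3(x, d, (2, 1, 3))

x = [math.sin(2 * math.pi * (t + 0.3) / 12) for t in range(600)]
# u(x, 6) = 0 exactly; u(x, 1) is large since the series is smooth.
```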

[Figure: four panels showing $u(d)$, $v(d)$ and $w(d)$]

Figure 10. Upper panel: $u(d)$ for sunspots and lynx. Lower panel: Different versions of $v(d)$ (thin: $p_{132} - p_{213}$, thick: $p_{312} - p_{231}$) and of $w(d)$ (thick: $p_{12}(d) - p_{12}(2d)$, thin: $p_{231} - p_{213}$, dotted: $p_{312} - p_{132}$) for the sunspots

The lynx series is rather smooth since $u(1)$ is the maximum. For the sunspots, the first maximum occurs for $d$ between 15 and 20, indicating the noisy character of the series. The lower panel of Figure 10 shows that the sunspot process is rotation symmetric while the shape of maxima and minima is quite different. The differences between different representations of $v$ and $w$ given by equations b) and e) are surprisingly small. Remember that an error estimate based on the binomial distribution says that the values $u(d)$, considered as estimates for the underlying process, have standard deviation 0.1 (cf. comment to Figure 1).

    Gaussian processes

For a reversible process we have $p_{123} = p_{321}$, $p_{132} = p_{231}$, and $p_{213} = p_{312}$. If the distribution of

$$(Z_1 = X_{t+d} - X_t\,,\quad Z_2 = X_{t+2d} - X_{t+d})$$

is symmetric with respect to its mean (i.e. for mean zero the density fulfils $\varphi(-x, -y) = \varphi(x, y)$), then $p_{123} = p_{321}$, $p_{132} = p_{213}$, and $p_{231} = p_{312}$ (cf. Figure 8).

Let us consider a process with stationary increments for which $(Z_1, Z_2)$ has Gaussian distribution, for all $d$. It fulfils both sets of equations since $\varphi(y, x) = \varphi(x, y) = \varphi(-x, -y)$ for the density of $(Z_1, Z_2)$. Such a process is also rotation symmetric, and is completely characterized by $p_{123} = p_{321} = \frac{u}{2}$. The other patterns have probability $\frac{1-u}{4}$. Let us determine $p_{123}$. The following is well-known [2, 17].

Lemma 3.2. If $(Z_1, Z_2, Z_3)$ has a Gaussian distribution with zero mean and correlation coefficients $\rho_{ij} = \rho(Z_i, Z_j)$ then

$$P(Z_1 > 0, Z_2 > 0) = \frac14 + \frac{1}{2\pi} \arcsin \rho_{12}\,,$$

$$P(Z_1 > 0, Z_2 > 0, Z_3 > 0) = \frac18 + \frac{1}{4\pi} \left(\arcsin \rho_{12} + \arcsin \rho_{13} + \arcsin \rho_{23}\right).$$

For our processes, $p_{123}(d) = P(Z_1 > 0, Z_2 > 0)$, so only $\rho = \rho(Z_1, Z_2)$ needs to be determined. In the case of a stationary Gaussian process, we can assume $E(X_s) = 0$ and $E(X_s^2) = 1$ for all $s$, and take $t = 0$ in $Z_1, Z_2$:

$$\rho = \frac{E((X_d - X_0)(X_{2d} - X_d))}{E((X_d - X_0)^2)} = \frac{2\rho_d - 1 - \rho_{2d}}{2(1 - \rho_d)}$$

where $\rho_d = \rho(X_0, X_d)$ is the autocorrelation of the process. Now we use the formula $\arcsin \rho = 2 \arcsin \sqrt{(1+\rho)/2} - \pi/2$ and get the following result after a small calculation.

Proposition 3.3. For a stationary Gaussian process with autocorrelation function $\rho_d$,

$$p_{123}(d) = p_{321}(d) = \frac{1}{\pi} \arcsin\left( \frac12 \sqrt{\frac{1 - \rho_{2d}}{1 - \rho_d}} \right).$$
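Proposition 3.3 is easy to evaluate. A sketch with two sanity checks: vanishing autocorrelation gives the white-noise value $\frac16$, and inserting $\rho_d = a^d$ gives the AR(1) formula of the corollary below (function names are ours):

```python
import math

def p123(rho_d, rho_2d):
    """p_123(d) for a stationary Gaussian process (Proposition 3.3)."""
    return math.asin(0.5 * math.sqrt((1 - rho_2d) / (1 - rho_d))) / math.pi

def p123_ar1(a, d):
    """Same quantity with rho_d = a**d inserted (Gaussian AR(1))."""
    return math.asin(0.5 * math.sqrt(1 + a ** d)) / math.pi
```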


It is clear that for white noise, or any process with exchangeable three-dimensional distributions, all patterns $\pi$ of length 3 have the same probability $\frac16$. The reason is that for any vector $x_1, x_2, x_3$, all permutations of the coordinates have the same probability to occur in a time series. A corresponding remark holds for patterns of order $n$. With Proposition 3.3 we can analytically describe $p_{123}$ for some other processes.

Corollary 3.4. For a Gaussian AR(1)-process $X_t = aX_{t-1} + \varepsilon_t$ we have

$$p_{123}(d) = \frac{1}{\pi} \arcsin\left( \frac12 \sqrt{1 + a^d} \right) \quad\text{for all } d.$$

For a Gaussian AR(2)-process $X_t = a_1 X_{t-1} + a_2 X_{t-2} + \varepsilon_t$ we have

$$p_{123}(1) = \frac{1}{\pi} \arcsin\left( \frac12 \sqrt{1 + a_1 - a_2} \right).$$

For fractional Brownian motion with Hurst parameter $H \in (0, 1)$,

$$p_{123}(d) = \frac{1}{\pi} \arcsin 2^{H-1} \quad\text{for all } d.$$

Proof. $\rho_d = a^d$ for AR(1), and $\rho_1 = a_1/(1-a_2)$, $\rho_2 = (a_1^2 - a_2^2 + a_2)/(1-a_2)$ for AR(2) [19] are just inserted into Proposition 3.3. Fractional Brownian motion $B(t) = B_H(t)$ is a Gaussian process with mean zero, stationary increments, variance $E(B^2(t)) = t^{2H}$ and covariance $E(B(s)B(t)) = \frac12 \left(s^{2H} + t^{2H} - |s-t|^{2H}\right)$. This implies $E((B(t+d) - B(t))^2) = d^{2H}$ and

$$E((B(t+d) - B(t))(B(t+2d) - B(t+d))) = \tfrac12 \left((2d)^{2H} - 2d^{2H}\right)$$

so that $\rho = \rho(Z_1, Z_2) = 2^{2H-1} - 1$. Since $B(t)$ is a self-similar process, $\rho$ and also the $p_\pi$ do not depend on the value of $d$. The formula $\arcsin \rho = 2 \arcsin \sqrt{(1+\rho)/2} - \pi/2$ together with Lemma 3.2 gives the result.

For ordinary Brownian motion, $H = \frac12$, we get $p_{123}(d) = \frac14$, $p_{132}(d) = \frac18$. Moreover, $p_{123}(d) \to \frac12$ for $H \to 1$. For $H \to 0$, we obtain the case of white noise: equal probabilities for all permutations.
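The fractional Brownian motion formula and its limiting cases, in code (a small sketch):

```python
import math

def p123_fbm(H):
    """p_123 for fractional Brownian motion with Hurst parameter H."""
    return math.asin(2 ** (H - 1)) / math.pi
```

At $H = \frac12$ this gives $\frac14$ (ordinary Brownian motion), near $H = 0$ the white-noise value $\frac16$, and for $H \to 1$ it approaches $\frac12$.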

    Persistence as autocorrelation

For a stationary process, we can consider $p_{123}(d)$ as a kind of autocorrelation function. Actually, it is a diagonal version of Kendall's tau [3, 6]. The value $p_{123}(d) = \frac16$ corresponds to $\rho_d = 0$; it will be approached for $d \to \infty$ if there is no trend or long-range dependence. On the other hand, the maximal value $p_{123}(d) = \frac12$ should correspond to $\rho_d = 1$ and the minimal value $p_{123}(d) = 0$ to $\rho_d = -1$. Thus one way to standardize the function $p_{123}(d)$ is $q_{123}(d) = 4(p_{123}(d) - \frac16)$, with range $(-\frac23, \frac43)$ near $(-1, 1)$. Figure 11b shows the autocorrelation function of the sunspot series, influenced by the upwards trend, compared to $q_{123}(d)$ as well as the standardized persistence function $2(u(d) - \frac13)$.

For Gaussian AR(1)-processes $X_t = aX_{t-1} + \varepsilon_t$, $|a| < 1$, the standardization $q_{123}(d) = 12(p_{123}(d) - \frac16)$ with range $(-2, 4)$ is more appropriate. Using the corollary we can easily check that there is a perfect coincidence between $q_{123}$ and the autocorrelation function: the difference is less than 0.02 for all $a > -0.17$ and all $d$. Even for an AR(2)-process like $X_t = 1.7X_{t-1} - 0.8X_{t-2} + \varepsilon_t$, where the corollary implies $q_{123}(1) = 2.62$, the coincidence shown in Figure 11a is remarkable. Thus although the standardization of $p_{123}$ is not uniquely determined, it can compete with classical autocorrelation.
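The claimed coincidence for AR(1) is easy to check numerically. A sketch for one value of $a$ ($a = 0.8$ is our choice):

```python
import math

def q123_ar1(a, d):
    """Standardized persistence 12 (p_123(d) - 1/6) for Gaussian AR(1)."""
    p = math.asin(0.5 * math.sqrt(1 + a ** d)) / math.pi
    return 12 * (p - 1 / 6)

# compare with the autocorrelation a**d over a range of delays:
max_diff = max(abs(q123_ar1(0.8, d) - 0.8 ** d) for d in range(1, 41))
```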

[Figure: two panels comparing autocorrelation with the standardized $p_{123}(d)$]

Figure 11. Autocorrelation and standardized function $p_{123}(d)$ for the Gaussian AR(2) process $X_t = 1.7X_{t-1} - 0.8X_{t-2} + \varepsilon_t$ (left) and for the sunspot series (right, dotted: $2(u(d) - \frac13)$)

    4 Longer order patterns

So far, we have considered only the simplest order patterns. There are many ways to extend this work. Even for patterns of length 3, we can take arbitrary delays $d_1, d_2$ instead of $d, 2d$. Some experiments in this direction were done by Groth [13].

For patterns of length 4, determined for vectors $(x_t, x_{t+d_1}, x_{t+d_2}, x_{t+d_3})$, the choice $d_1 = d$, $d_2 = k$ and $d_3 = d+k$ leads to Kendall's tau, an interesting autocorrelation function [3, 6]. Here $k$ is the delay parameter of autocorrelation while $d$ is another scale parameter. The tau-autocorrelation of Ferguson, Genest and Hallin [11] is a sum over all $d$ while we prefer to have $d$ as a free parameter which can be chosen in an appropriate way.

Some results of the previous section, like Proposition 3.1, can be extended to equidistant patterns of length 4, $d_j = jd$. In particular, we can determine the probabilities $p_\pi$ for Gaussian processes, using the second part of Lemma 3.2. Because of the symmetries of normal distributions, there are eight classes of permutations:

Theorem 4.1. For a stationary Gaussian process and arbitrary $d > 0$,

$$p_{1234} = p_{4321} = \frac18 + \frac{1}{4\pi}\left(\arcsin \rho_1 + 2\arcsin \rho_2\right),$$

$$p_{3142} = p_{2413} = \frac18 + \frac{1}{4\pi}\left(2\arcsin \rho_3 + \arcsin \rho_4\right),$$

$$p_{4231} = p_{1324} = \frac18 + \frac{1}{4\pi}\left(\arcsin \rho_4 - 2\arcsin \rho_5\right),$$

$$p_{2143} = p_{3412} = \frac18 + \frac{1}{4\pi}\left(2\arcsin \rho_6 + \arcsin \rho_1\right),$$

$$p_{1243} = p_{2134} = p_{3421} = p_{4312} = \frac18 + \frac{1}{4\pi}\left(\arcsin \rho_7 - \arcsin \rho_1 - \arcsin \rho_5\right),$$

$$p_{1423} = p_{4132} = p_{3241} = p_{2314} = \frac18 + \frac{1}{4\pi}\left(\arcsin \rho_7 - \arcsin \rho_4 - \arcsin \rho_5\right),$$

$$p_{3124} = p_{1342} = p_{4213} = p_{2431} = \frac18 + \frac{1}{4\pi}\left(\arcsin \rho_3 + \arcsin \rho_8 - \arcsin \rho_5\right),$$

$$p_{1432} = p_{4123} = p_{2341} = p_{3214} = \frac18 + \frac{1}{4\pi}\left(\arcsin \rho_6 - \arcsin \rho_8 + \arcsin \rho_2\right),$$

where

$$\rho_1 = \frac{2\rho_{2d} - \rho_d - \rho_{3d}}{2(1-\rho_d)}\,,\quad \rho_2 = \frac{2\rho_d - \rho_{2d} - 1}{2(1-\rho_d)}\,,\quad \rho_3 = \frac{\rho_{2d} + \rho_{3d} - \rho_d - 1}{2\sqrt{(1-\rho_{2d})(1-\rho_{3d})}}\,,$$

$$\rho_4 = \frac{\rho_d - \rho_{3d}}{2(1-\rho_{2d})}\,,\quad \rho_5 = \frac12 \sqrt{\frac{1-\rho_{2d}}{1-\rho_d}}\,,\quad \rho_6 = \frac{\rho_d + \rho_{3d} - \rho_{2d} - 1}{2\sqrt{(1-\rho_d)(1-\rho_{3d})}}\,,$$

$$\rho_7 = \frac{\rho_d + \rho_{2d} - \rho_{3d} - 1}{2\sqrt{(1-\rho_d)(1-\rho_{2d})}}\,,\quad \rho_8 = \frac{\rho_d - \rho_{2d}}{\sqrt{(1-\rho_d)(1-\rho_{3d})}}\,.$$
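The theorem can be written down directly in code. A sketch with two sanity checks: for white noise (all $\rho = 0$) every pattern probability equals $\frac{1}{24}$, and the 24 probabilities always sum to 1 (the function name and return format are ours):

```python
import math

def pattern4_probs(r1, r2, r3):
    """Eight classes of length-4 pattern probabilities for a stationary
    Gaussian process; r1, r2, r3 stand for rho_d, rho_2d, rho_3d.
    Returns (multiplicity, probability) pairs in the order of the theorem."""
    rho = [
        (2 * r2 - r1 - r3) / (2 * (1 - r1)),                        # rho_1
        (2 * r1 - r2 - 1) / (2 * (1 - r1)),                         # rho_2
        (r2 + r3 - r1 - 1) / (2 * math.sqrt((1 - r2) * (1 - r3))),  # rho_3
        (r1 - r3) / (2 * (1 - r2)),                                 # rho_4
        0.5 * math.sqrt((1 - r2) / (1 - r1)),                       # rho_5
        (r1 + r3 - r2 - 1) / (2 * math.sqrt((1 - r1) * (1 - r3))),  # rho_6
        (r1 + r2 - r3 - 1) / (2 * math.sqrt((1 - r1) * (1 - r2))),  # rho_7
        (r1 - r2) / math.sqrt((1 - r1) * (1 - r3)),                 # rho_8
    ]
    a = [math.asin(r) for r in rho]
    c = 1 / (4 * math.pi)
    return [
        (2, 1 / 8 + c * (a[0] + 2 * a[1])),     # 1234, 4321
        (2, 1 / 8 + c * (2 * a[2] + a[3])),     # 3142, 2413
        (2, 1 / 8 + c * (a[3] - 2 * a[4])),     # 4231, 1324
        (2, 1 / 8 + c * (2 * a[5] + a[0])),     # 2143, 3412
        (4, 1 / 8 + c * (a[6] - a[0] - a[4])),  # 1243, 2134, 3421, 4312
        (4, 1 / 8 + c * (a[6] - a[3] - a[4])),  # 1423, 4132, 3241, 2314
        (4, 1 / 8 + c * (a[2] + a[7] - a[4])),  # 3124, 1342, 4213, 2431
        (4, 1 / 8 + c * (a[5] - a[7] + a[1])),  # 1432, 4123, 2341, 3214
    ]
```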

Details are given in [18]. This theorem looks similar for patterns with delays $d_1, d_2, d_3$. However, analytical expressions as above seem not to exist for patterns of order $\ge 5$ since there is no appropriate generalization of Lemma 3.2 [2, 17].

For fractional Brownian motion with Hurst parameter $H$, we have the same formula as in the first part of the above theorem, but now with

$$\rho_1 = \frac{1 + 3^{2H} - 2^{2H+1}}{2}\,,\quad \rho_2 = 2^{2H-1} - 1\,,\quad \rho_3 = \frac{1 - 3^{2H} - 2^{2H}}{2 \cdot 6^H}\,,\quad \rho_4 = \frac{3^{2H} - 1}{2^{2H+1}}\,,$$

$$\rho_5 = 2^{H-1}\,,\quad \rho_6 = \frac{2^{2H} - 3^{2H} - 1}{2 \cdot 3^H}\,,\quad \rho_7 = \frac{3^{2H} - 2^{2H} - 1}{2^{H+1}}\,,\quad \rho_8 = \frac{2^{2H} - 1}{3^H}\,.$$

In particular, ordinary Brownian motion fulfils $p_{1234} = p_{4321} = \frac18$ and $p_{1243} = p_{2134} = p_{3421} = p_{4312} = \frac{1}{16}$, and all other patterns have smaller probability.

For length 4 we have not specified reasonable parameters, or groups of patterns, like the indices of persistence and rotation symmetry for length 3. Numerical experiments have shown, however, that the study of individual order patterns can be useful even for order 4. For instance, the rotation symmetry for the sunspot series can be verified for all order 4 patterns, which seems rather curious.

One way to deal with the distribution of all patterns together is the permutation entropy introduced in [4]. This is just the Shannon entropy of the $n!$ probabilities $p_\pi$ of order patterns for any fixed length $n$. In numerical studies of chaotic time series, we can go with $n$ up to 16 [4], and theoretically we can let $n$ tend to infinity [5]. For empirical time series of moderate length (500 or less), lengths up to $n = 5$ make sense, as was verified for speech signals in [4] and for EEG data in [9, 16]. Some other approaches to ordinal time series were sketched in [3].
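Permutation entropy as described is a few lines of code. A sketch with delay 1 and natural logarithm (both choices are ours, not fixed by the text):

```python
import math
from collections import Counter

def permutation_entropy(x, n=3):
    """Shannon entropy of the frequencies of order patterns of length n."""
    counts = Counter()
    for t in range(len(x) - n + 1):
        w = x[t:t + n]
        counts[tuple(sorted(range(n), key=w.__getitem__))] += 1
    total = sum(counts.values())
    return -sum(c / total * math.log(c / total) for c in counts.values())
```

A monotone series has entropy 0 (a single pattern occurs), and the maximal value for length $n$ is $\log n!$.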

We presented a starting point. A lot of work lies ahead.

    References

[1] H.D.I. Abarbanel, Analysis of observed chaotic data, Springer, New York 1995

[2] R.H. Bacon, Approximation to multivariate normal orthant probabilities, Ann. Math. Statist. 34 (1963), 191-198

[3] C. Bandt, Ordinal time series analysis, Ecological Modelling 182 (2005), 229-238

[4] C. Bandt and B. Pompe, Permutation entropy: a natural complexity measure for time series, Phys. Rev. Lett. 88 (2002), 174102

[5] C. Bandt, G. Keller and B. Pompe, Entropy of interval maps via permutations, Nonlinearity 15 (2002), 1595-1602

[6] C. Bandt, S. Laude and H. Lauffer, Kendall's tau as autocorrelation, Preprint, Greifswald 2000

[7] D. Brillinger, An analysis of an ordinal-valued time series, Lecture Notes in Statistics 115, 73-87, Springer 1996


    [8] D. Brillinger et al., Automatic methods for generating seismic intensity maps, J. Appl. Probab. 38A (2001), 188-201

    [9] Y. Cao, W. Tung, J.B. Gao, V.A. Protopopescu and L.M. Hively, Detecting dynamical changes in time series using the permutation entropy, Phys. Rev. E 70 (2004), 046217

    [10] P. Collet and J.-P. Eckmann, Iterated maps on the interval as dynamical systems, Birkhäuser, Basel 1980

    [11] S.T. Ferguson, C. Genest and M. Hallin, Kendall's tau for serial dependence, Canadian J. Statist. 28 (2000), 587-604

    [12] B. Garel and M. Hallin, Rank-based autoregressive order identification, J. Am. Stat. Assoc. 94 (1999), 1357-1371

    [13] A. Groth, Visualization of coupling in time series by order recurrence plots, Greifswald 2004, submitted

    [14] M. Hallin and J. Jureckova, Optimal tests for autoregressive models based on autoregression rank scores, Annals Stat. 27 (1999), 1385-1414

    [15] M. Hallin and B.J.M. Werker, Optimal testing for semi-parametric AR models: from Gaussian Lagrange multipliers to autoregression rank scores and adaptive tests, in: Asymptotics, Nonparametrics, and Time Series (ed. S. Ghosh), Marcel Dekker, New York 1999, 295-350

    [16] K. Keller and H. Lauffer, Symbolic analysis of high-dimensional time series, Int. J. Bifurcation and Chaos 13 (2003), 2428-2432

    [17] R.L. Plackett, A reduction formula for normal multivariate integrals, Biometrika 41 (1954), 351-360

    [18] F.A. Shiha, Distributions of order patterns in time series, PhD dissertation, Greifswald 2004

    [19] R.H. Shumway and D.S. Stoffer, Time Series Analysis and its Applications, Springer 2000

    [20] H. Tong, Non-linear time series, Oxford University Press 1990

    Christoph Bandt, Institut für Mathematik und Informatik, Arndt-Universität, 17487 Greifswald, Germany, [email protected]

    Faten Shiha, Mathematics Department, Faculty of Science, Mansoura University, Mansoura, Egypt, [email protected]
