
Nonlin. Processes Geophys., 13, 601–612, 2006
www.nonlin-processes-geophys.net/13/601/2006/
© Author(s) 2006. This work is licensed under a Creative Commons License.

Nonlinear Processes in Geophysics

Synchronicity in predictive modelling: a new view of data assimilation

G. S. Duane1, J. J. Tribbia1, and J. B. Weiss2

1 National Center for Atmospheric Research, PO Box 3000, Boulder, CO 80307, USA
2 Department of Atmospheric and Oceanic Sciences, UCB 311, University of Colorado, Boulder, CO 80309, USA

    Received: 16 March 2006 – Revised: 15 August 2006 – Accepted: 5 September 2006 – Published: 3 November 2006

Abstract. The problem of data assimilation can be viewed as one of synchronizing two dynamical systems, one representing “truth” and the other representing “model”, with a unidirectional flow of information between the two. Synchronization of truth and model defines a general view of data assimilation, as machine perception, that is reminiscent of the Jung-Pauli notion of synchronicity between matter and mind. The dynamical systems paradigm of the synchronization of a pair of loosely coupled chaotic systems is expected to be useful because quasi-2D geophysical fluid models have been shown to synchronize when only medium-scale modes are coupled. The synchronization approach is equivalent to standard approaches based on least-squares optimization, including Kalman filtering, except in highly non-linear regions of state space where observational noise links regimes with qualitatively different dynamics. The synchronization approach is used to calculate covariance inflation factors from parameters describing the bimodality of a one-dimensional system. The factors agree in overall magnitude with those used in operational practice on an ad hoc basis. The calculation is robust against the introduction of stochastic model error arising from unresolved scales.

    1 Introduction

A computational model of a physical process that provides a stream of new data to the model as it runs must include a scheme to combine the new data with the model’s prediction of the current state of the process. The goal of any such scheme is the optimal prediction of the future behavior of the physical process. While the relevance of the data assimilation problem is thus quite broad, techniques have been investigated most extensively for weather modeling, because of the high dimensionality of the fluid dynamical state space, and the frequency of potentially useful new observational input. Existing data assimilation techniques (3DVar, 4DVar, Kalman Filtering, and Ensemble Kalman Filtering) combine observed data with the most recent forecast of the current state to form a best estimate of the true state of the atmosphere, each approach making different assumptions about the nature of the errors in the model and the observations.

Correspondence to: G. S. Duane ([email protected])

An alternative view of the data assimilation problem is suggested here. The objective of the process is not to “nowcast” the current state of reality, but to make the model converge to reality in the future. Recognizing also that a predictive model, especially a large one, is a semi-autonomous dynamical system in its own right, influenced but not determined by observational input from a co-existing reality, it is seen that the guiding principle that is needed is one of synchronism. That is, we seek to introduce a one-way coupling between reality and model, such that the two tend to be in the same state, or in states that in some way correspond, at each instant of time. The problem of data assimilation thus reduces to the problem of synchronization of a pair of dynamical systems, unidirectionally coupled through a noisy channel that passes a limited number of “observed” variables.

While the synchronization of loosely coupled regular oscillators with limit cycle attractors is ubiquitous in nature (Strogatz, 2003), synchronization of chaotic oscillators has only been explored recently, in a wave of research spurred by the seminal work of Pecora and Carroll (1990). Chaos synchronization can be surprising because it implies that two systems, each effectively unpredictable, connected by a signal that can be virtually indistinguishable from noise, nevertheless exhibit a predictable relationship. Chaos synchronization has indeed been used to predict new kinds of weak teleconnection patterns relating different sectors of the global climate system (Duane, 1997; Duane et al., 1999; Duane and Tribbia, 2004).

    Published by Copernicus GmbH on behalf of the European Geosciences Union and the American Geophysical Union.


It is now clear that chaos synchronization is surprisingly easy to arrange, in both ODE and PDE systems (Kocarev et al., 1997; Duane and Tribbia, 2001, 2004). A pair of spatially extended chaotic systems such as two quasi-2D fluid models, if coupled at only a discrete set of points and intermittently in time, can be made to synchronize completely. The application of chaos synchronization to the tracking of one dynamical system by another was proposed by So et al. (1994), so the synchronization of the fluid models suggests a natural extension to meteorological data assimilation that has not heretofore been recognized.

Since the problem of data assimilation arises in any situation requiring a computational model of a parallel physical process to track that process as accurately as possible based on limited input, it is suggested here that the broadest view of data assimilation is that of machine perception by an artificially intelligent system. Indeed, the new field of Dynamic Data Driven Application Systems (DDDAS) is defined as the real-time modelling of evolving physical systems based on select observations1. Like a data assimilation system, the human mind forms a model of reality that functions well, despite limited sensory input, and one would like to impart such an ability to the computational model. In the artificial intelligence view of data assimilation, the additional issue of model error can be approached naturally as a problem of machine learning, as discussed in the concluding section.

In this more general context, the role of synchronism is reminiscent of the psychologist Carl Jung’s notion of synchronicity in his view of the relationship between mind and the material world. Jung had noted uncanny coincidences or “synchronicities” between mental and physical phenomena. In collaboration with Wolfgang Pauli (Jung and Pauli, 1955), he took such relationships to reflect a new kind of order connecting the two realms. (The new order was taken to explain relationships between seemingly unconnected phenomena in the objective world as well.) It was important to Jung and Pauli that synchronicities themselves were distinct, isolated events, but as described in Sect. 2.1, such phenomena can emerge naturally as a degraded form of chaos synchronization.

A principal question that is addressed in this paper is whether the synchronization view of data assimilation is merely an appealing reformulation of standard treatments, or is different in substance. The first point to be made is that all standard data assimilation approaches, if successful, do achieve synchronization, so that synchronization defines a more general family of algorithms that includes the standard ones. It remains to determine whether there are synchronization schemes that lead to faster convergence than the standard data assimilation algorithms. It is shown here analytically that optimal synchronization is equivalent to Kalman filtering when the dynamics change slowly in phase space, so that the same linear approximation is valid at each point in time for the real dynamical system and its model. When the dynamics change rapidly, as in the vicinity of a regime transition, one must consider the full nonlinear equations and there are better synchronization strategies than the one given by Kalman filtering or ensemble Kalman filtering. The deficiencies of the standard methods, which are well known in such situations, are usually remedied by ad hoc corrections, such as “covariance inflation” (Anderson, 2001). In the synchronization view, such corrections can be derived from first principles.

1 http://www.dddas.org

This paper takes a broad view of data assimilation by a model system, defined as a set of differential equations, that is coupled to noisy data obtained from a “true system”, defined by the same set of differential equations, with the possible addition of a stochastic term to represent model error. We begin by reviewing the phenomenology of chaos synchronization generally in Sect. 2.1, and an application to geophysical fluid systems in Sect. 2.2. A brief review of standard data assimilation is provided in Sect. 2.3. In Sect. 3 the synchronization approach is compared to standard approaches. The optimal synchronization problem for a coupled pair of stochastic differential equations is framed as a problem of finding the coupling that gives the tightest synchronization in a linear approximation with observational noise. The optimal coupling thus derived can be compared to forms used in standard data assimilation. The difference becomes large in regions of state-space where nonlinearities are important. In Sect. 4, a comparison of the two approaches for the full nonlinear case is used to estimate covariance inflation factors that would be needed to adjust the Kalman filter scheme to give optimal synchronization, for both perfect models and models including stochastic error from unresolved scales. Section 5 concludes by expanding on the view of data assimilation as machine perception and discussing automatic model adaptation in the synchronization framework.

2 Background: synchronized chaos and data assimilation

    2.1 Chaos synchronization

The phenomenon of chaos synchronization was first brought to light by Fujisaka and Yamada (1983) and independently by Afraimovich et al. (1986). Extensive research on the synchronization of chaotic systems in the ’90s was spurred by the work of Pecora and Carroll (1990), who found that two Lorenz (1963) systems would synchronize when the X or Y variable of one was slaved to the respective X or Y variable of the other, despite sensitive dependence on initial values of the other variables. (Synchronization does not occur if the Z variables are analogously coupled.)
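The Pecora-Carroll replacement scheme can be sketched numerically. The following is a minimal illustration (forward-Euler time-stepping, assumed initial conditions; not code from the paper): the response Lorenz system has its X variable overwritten by the drive's X, and its remaining (Y, Z) variables then converge to the drive's despite very different starting values.

```python
def lorenz_rhs(x, y, z, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz (1963) system."""
    return sigma * (y - x), rho * x - y - x * z, -beta * z + x * y

dt = 1e-3
xd, yd, zd = 1.0, 1.0, 20.0   # drive ("truth") system
yr, zr = -8.0, 35.0           # response: its X is slaved to the drive's X
for _ in range(30_000):       # 30 time units of forward-Euler integration
    dxd, dyd, dzd = lorenz_rhs(xd, yd, zd)
    _, dyr, dzr = lorenz_rhs(xd, yr, zr)   # response driven by the drive's X
    xd, yd, zd = xd + dt * dxd, yd + dt * dyd, zd + dt * dzd
    yr, zr = yr + dt * dyr, zr + dt * dzr
err = abs(yd - yr) + abs(zd - zr)
print(err < 1e-3)  # True: the (Y, Z) subsystem synchronizes
```

The (Y, Z) error dynamics here are uniformly contracting when driven by a common X, which is why convergence is guaranteed; slaving Z instead does not share this property.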

In this paper we consider a weaker diffusive form of coupling, as illustrated by the following pair of bidirectionally coupled Lorenz systems:

Ẋ = σ(Y − X) + α(X_1 − X)
Ẏ = ρX − Y − XZ
Ż = −βZ + XY
                                                    (1)
Ẋ_1 = σ(Y_1 − X_1) + α(X − X_1)
Ẏ_1 = ρX_1 − Y_1 − X_1 Z_1
Ż_1 = −βZ_1 + X_1 Y_1

where α parameterizes the coupling strength. The two Lorenz systems synchronize rapidly for appropriate values of α, and also do so for unidirectional coupling, defined by removing the term in α from the first equation.
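The unidirectional case of Eq. (1) can be sketched as follows (a toy illustration with an assumed coupling strength α = 30 and forward-Euler time-stepping; not the paper's code): the "truth" system runs freely, while the "model" is nudged in X alone, and the full states converge.

```python
import numpy as np

def step(state, drive_x=None, alpha=0.0, dt=1e-3,
         sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of a Lorenz system, optionally nudged in X."""
    x, y, z = state
    dx = sigma * (y - x)
    if drive_x is not None:
        dx += alpha * (drive_x - x)   # diffusive coupling term alpha*(X - X1)
    dy = rho * x - y - x * z
    dz = -beta * z + x * y
    return state + dt * np.array([dx, dy, dz])

truth = np.array([1.0, 1.0, 20.0])    # "truth"
model = np.array([-5.0, 4.0, 30.0])   # "model", different initial condition
for _ in range(40_000):               # 40 time units
    new_truth = step(truth)                            # truth runs freely
    model = step(model, drive_x=truth[0], alpha=30.0)  # model is nudged in X
    truth = new_truth
err = float(np.abs(truth - model).sum())
print(err < 1e-3)  # True: the two systems synchronize despite chaos
```

For α large the diffusive coupling approaches the Pecora-Carroll replacement scheme; for α below a finite threshold, synchronization is lost.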

For a pair of coupled systems that are not identical, as with an imperfect model of a physical system, synchronization may still occur, but the correspondence between the states of the two systems, that defines the synchronization manifold in state space, is different from the identity. In this situation, known as generalized synchronization, we have two different dynamical systems ẋ = F(x) and ẏ = G(y), with x, y ∈ R^n, modified in some manner so as to define two coupled systems ẋ = F̂(x, y) and ẏ = Ĝ(y, x). The systems are said to be generally synchronized iff there is some locally invertible function Φ: R^n → R^n such that ||Φ(x) − y|| → 0 as t → ∞ (Rulkov et al., 1995). Generalized synchronization can be shown to occur even for very different systems, as with a Rössler system coupled to a Lorenz system, but with a correspondence function Φ that is nowhere smooth (Pecora et al., 1997).

It is commonly not the existence, but the stability of the synchronization manifold in state space that distinguishes coupled systems exhibiting synchronization from those that do not (such as Eq. 1 for different values of α). As the coupling is weakened, bursts of desynchronization (a special case of on-off intermittency) interrupt the synchronized behavior. On-off synchronization, that can also arise from noise in the communication channel between the two systems, is a second way that identical synchronization is found to degrade (Ashwin et al., 1994). In the data assimilation application, it corresponds to “catastrophes” of large model drift that can arise from observational noise (Baek et al., 2004).

    2.2 Synchronization between geophysical fluid systems

Pairs of 1D PDE systems of various types, coupled diffusively at discrete points in space and time, were shown to synchronize by Kocarev et al. (1997). Synchronization in geophysical fluid models was demonstrated by Duane and Tribbia (2001), originally with a view toward predicting and explaining new families of long-range teleconnections (Duane and Tribbia, 2004).

Fig. 1. Streamfunction (in units of 1.48×10^9 m^2 s^-1) describing the forcing ψ* (a, b), and the evolving flow ψ (c–f), in a parallel channel model with bidirectional coupling of medium-scale modes for which |k_x| > k_x0 = 3 or |k_y| > k_y0 = 2, and |k| ≤ 15, for the indicated numbers n of time steps in a numerical integration. Parameters are as in Duane and Tribbia (2004). An average streamfunction for the two vertical layers i = 1, 2 is shown. Synchronization occurs by the last time shown (e, f), despite differing initial conditions.

The uncoupled single-system model, derived from one described by Vautard et al. (1988), is given by the quasi-geostrophic equation for potential vorticity q in a two-layer reentrant channel on a β-plane:

Dq_i/Dt ≡ ∂q_i/∂t + J(ψ_i, q_i) = F_i + D_i    (2)

where the layer i = 1, 2, ψ is streamfunction, and the Jacobian J(ψ, ·) = (∂ψ/∂x)(∂·/∂y) − (∂ψ/∂y)(∂·/∂x) gives the advective contribution to the Lagrangian derivative D/Dt. Equation (2) states that potential vorticity is conserved on a moving parcel, except for forcing F_i and dissipation D_i. The discretized potential vorticity is

q_i = f_0 + βy + ∇²ψ_i + R_i^-2 (ψ_1 − ψ_2)(−1)^i    (3)

where f(x, y) is the vorticity due to the Earth’s rotation at each point (x, y), f_0 is the average f in the channel, β is the constant df/dy, and R_i is the Rossby radius of deformation in each layer. The forcing F is a relaxation term designed to induce a jet-like flow near the beginning of the channel: F_i = µ_0(q*_i − q_i), for q*_i corresponding to the choice of ψ* shown in Fig. 1a. The dissipation terms D, boundary conditions, and other parameter values are given in Duane and Tribbia (2004).

Two models of the form (2), Dq^A/Dt = F^A + D^A and Dq^B/Dt = F^B + D^B, were coupled diffusively in one direction by modifying one of the forcing terms:

F^B_k = µ^c_k [q^A_k − q^B_k] + µ^ext_k [q*_k − q^B_k]    (4)

where the flow has been decomposed spectrally and the subscript k on each quantity indicates the wave number k spectral component. (The layer index i has been suppressed.)

    www.nonlin-processes-geophys.net/13/601/2006/ Nonlin. Processes Geophys., 13, 601–612, 2006

  • 604 G. S. Duane et al.: Synchronicity in predictive modelling

    channel A channel B

    n=40:

    n=280:

    FIGURE 2.

    T

    A

    B

    A

    A

    B

    A

    T

    a)

    b)

    FIGURE 3.

Fig. 2. Flow results as in Fig. 1, but with the inter-channel coupling restricted to the 10-local-bred-vector subspace as in Eq. (5). Synchronization is apparent by the time step shown.

The two sets of coefficients µ^c_k and µ^ext_k were chosen to couple the two channels in some medium range of wavenumbers and to force each channel only with the low wavenumber components of the background flow.

It was found that the two channels rapidly synchronize if only the medium scale modes are coupled (Fig. 1), starting from initial flow patterns that are arbitrarily set equal to the forcing in one channel, and to a rather different pattern in the other channel. (Results are shown for bidirectional coupling, defined by adding an equation for F^A_k analogous to Eq. 4. The synchronization behavior for coupling in just one direction is very similar.) With unidirectional coupling, the synchronization effects data assimilation from the A channel into the B channel.

One question about data assimilation that may be addressed in the synchronization context concerns the definition and minimum number of variables that must be assimilated to give adequate predictive skill. It has been argued, for instance, that atmospheric dynamics is locally low-dimensional, and that a small number of locally selected bred vectors spans the effective state space in each local region (Patil et al., 2001). Bred vectors are commonly used to specify likely directions of forecast error (Toth and Kalnay, 1993, 1997).

If Patil et al. (2001)’s argument about low “BV-dimension” is correct, then it should only be necessary to couple two channels in the subspace defined by the properly chosen bred vectors, in each local region, to synchronize the two channels. That is, it should be possible to replace Eq. (4) by

F^B(x, y) = µ_BV Σ_i [(q^A − q^B) · b_i]_k b_i(x, y) + F^B_ext(x, y)    (5)

for (x, y) ∈ r_k, where the b_i for i = 1...10 are an orthonormal set of vectors formed from ten bred vectors B_i by Gram-Schmidt orthogonalization in each local region separately. That is, the bred vectors B_i are first computed globally for the channel as a whole, as in Toth and Kalnay (1997). Then, the channel is divided rectangularly into a 20×16 patchwork of local regions r_k, k = 1...320. The set of vectors b_i is formed from B_i by Gram-Schmidt orthogonalization of the set {B_i(x, y) : (x, y) ∈ r_k} for each r_k and concatenating the resulting vectors over all regions. The dot product in brackets in Eq. (5) is computed separately for each local region:

[v · w]_k ≡ Σ_{(x,y)∈r_k} v(x, y) w(x, y)    (6)

so that 320×10 = 3200 independent coefficients [(q^A − q^B) · b_i]_k are computed at each instant of time. Thus [b_i · b_j]_k = δ_ij for all k. The overall coupling strength is given by µ_BV, and the external forcing by the jet is defined by F^B_ext,k ≡ µ^ext_k [q*_k − q^B_k] as before.

It is found that two channels coupled in a truncated bred vector basis according to Eq. (5) do synchronize, as illustrated in Fig. 2, and do not synchronize with fewer independent regions or if fewer bred vectors are used to define the coupling subspace. The total number of independent coefficients, however, is comparable to or larger than the total number of Fourier components in the mid-range of scales that was seen to be effective for synchronization in the coupling scheme (4).

The lesson is that the synchronization phenomenon does not appear to be very sensitive to the detailed choice of coupling subspace. A similar conclusion was reached by Yang et al. (2004) for synchronizing Lorenz systems. Those authors obtained only small improvement by using bred vectors or singular vectors instead of single-variable coupling. In the present case of spatially extended models, it seems that any basis that captures the essential physical phenomena in each local region, phenomena that can be described in terms of a middle range of scales, is adequate for synchronization and hence for data assimilation.

    2.3 Data assimilation

Standard data assimilation, unlike synchronization, estimates the current state x_T ∈ R^n of one system, “truth”, from the state of a model system x_B ∈ R^n, combined with noisy observations of truth. The best estimate of truth is the “analysis” x_A, which is the state that minimizes error as compared to all possible linear combinations of observations and model. That is

x_A ≡ x_B + Λ(x_obs − x_B)    (7)

minimizes the analysis error ⟨(x_A − x_T)²⟩ for a stochastic distribution given by x_obs = x_T + ξ, where ξ is observational noise, for properly chosen n×n gain matrix Λ. The standard methods to be considered in this paper correspond to specific forms for the generally time-dependent matrix Λ.

The simple method known as 3dVar uses a time-independent Λ that is based on the time-averaged statistical properties of the observational noise and the resulting forecast error. Let the matrix

R ≡ ⟨ξξ^T⟩ = ⟨(x_obs − x_T)(x_obs − x_T)^T⟩    (8)

    be the observation error covariance, and the matrix

B ≡ ⟨(x_B − x_T)(x_B − x_T)^T⟩    (9)


be the “background” error covariance, describing the deviation of the model state from the true state. If both covariance matrices are assumed to be constant in time, then the optimal linear combination of background and observations is:

x_A = R(R + B)^-1 x_B + B(R + B)^-1 x_obs    (10)

The formula (10), which simply states that observations are weighted more heavily when background error is greater, and conversely, defines the 3dVar method in practical data assimilation, based on empirical estimates of R and B. The 4dVar method, which will not be considered here, generalizes Eq. (10) to estimate a short history of true states from a corresponding short history of observations.
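Equation (10) can be illustrated numerically. The following is a sketch with assumed two-dimensional covariances (not operational values); it also checks the algebraically equivalent "innovation" form x_A = x_B + B(R + B)^-1 (x_obs − x_B).

```python
# Minimal numeric illustration of the 3dVar combination, Eq. (10).
import numpy as np

R = np.array([[0.5, 0.0], [0.0, 0.5]])   # observation error covariance (assumed)
B = np.array([[2.0, 0.3], [0.3, 1.0]])   # background error covariance (assumed)
x_b   = np.array([1.0, -1.0])            # background (forecast)
x_obs = np.array([0.2,  0.4])            # observations

S = np.linalg.inv(R + B)
x_a = R @ S @ x_b + B @ S @ x_obs        # Eq. (10)

# equivalent innovation form: x_a = x_b + B(R+B)^{-1}(x_obs - x_b)
x_a2 = x_b + B @ S @ (x_obs - x_b)
print(np.allclose(x_a, x_a2))  # True
```

Because B is larger than R here, the analysis lands closer to the observations than to the background, as Eq. (10) dictates.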

The Kalman filtering method, that is popular for a variety of tracking problems, uses the dynamics of the model to update the background error covariance B sequentially. The analysis at each assimilation cycle i is:

x^i_A = R(R + B^i)^-1 x^i_B + B^i (R + B^i)^-1 x^i_obs    (11)

where the background x^i_B is formed from the previous analysis x^{i-1}_A simply by running the model M: R^n → R^n:

x^i_B = M_{i-1→i}(x^{i-1}_A)    (12)

as is done in 3dVar. But now the background error is updated according to

B^i = M_{i-1→i} A^{i-1} M^T_{i-1→i} + Q    (13)

where A is the analysis error covariance A ≡ ⟨(x_A − x_T)(x_A − x_T)^T⟩, given conveniently by A^-1 = B^-1 + R^-1. The matrix M is the tangent linear model given by

M_ab ≡ ∂M_b/∂x_a |_{x=x_A}    (14)

The update formula (13) gives the minimum analysis error ⟨|x_A − x_T|²⟩ = Tr A at each cycle. The term Q is the covariance of the error in the model itself, as discussed in Sect. 4.
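One analysis cycle, Eqs. (11)–(13), can be sketched for a linear toy model (assumed matrices, not from the paper). For a linear model the tangent linear model of Eq. (14) coincides with the model matrix itself, which keeps the sketch short.

```python
# Sketch of one Kalman-filter cycle for an assumed linear 2-D model.
import numpy as np

M = np.array([[1.0, 0.1], [-0.1, 0.95]])  # linear model (= tangent linear model)
R = 0.25 * np.eye(2)                      # observation error covariance
Q = 0.01 * np.eye(2)                      # model error covariance

x_a = np.array([0.0, 1.0])                # previous analysis
A   = 0.5 * np.eye(2)                     # previous analysis error covariance

# forecast step, Eq. (12), and background covariance update, Eq. (13)
x_b = M @ x_a
B   = M @ A @ M.T + Q

# analysis step, Eq. (11), with a synthetic observation
x_obs = np.array([0.3, 0.9])
S = np.linalg.inv(R + B)
x_a_new = R @ S @ x_b + B @ S @ x_obs

# updated analysis error covariance from A^{-1} = B^{-1} + R^{-1}
A_new = np.linalg.inv(np.linalg.inv(B) + np.linalg.inv(R))
print(np.trace(A_new) < np.trace(B))  # True: analysis error < background error
```

Since A^-1 = B^-1 + R^-1 exceeds B^-1, the analysis covariance is always strictly smaller than the background covariance, which the trace comparison confirms.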

3 Comparison of synchronization with standard methods of data assimilation

3.1 Optimal coupling for synchronization of stochastic differential equations

To compare synchronization to standard data assimilation, we inquire as to the coupling that is optimal for synchronization, so that this coupling can be compared to the gain matrix used in the standard 3dVar and Kalman filtering schemes. The general form of coupling of truth to model that we consider in this section is given by a system of stochastic differential equations:

ẋ_T = f(x_T)
ẋ_B = f(x_B) + C(x_T − x_B + ξ)    (15)

where the true state x_T ∈ R^n and the model state x_B ∈ R^n evolve according to the same dynamics, given by f, and where the noise ξ in the coupling (observation) channel is the only source of stochasticity. The form (15) is meant to include dynamics f described by partial differential equations, as in the last section. The system is assumed to reach an equilibrium probability distribution, centered on the synchronization manifold x_B = x_T. The goal is to choose a time-dependent matrix C so as to minimize the spread of the distribution.

Note that if C is a projection matrix, or a multiple of the identity, then Eq. (15) effects a form of nudging. But for arbitrary C, the scheme is much more general. Indeed, continuous-time generalizations of 3DVar and Kalman filtering can be put in the form (15).

Let us assume that the dynamics vary slowly in state space, so that the Jacobian F ≡ Df, at a given instant, is the same for the two systems:

Df(x_B) = Df(x_T)    (16)

where terms of O(x_B − x_T) are ignored. Then the difference between the two Eqs. (15), in a linearized approximation, is

ė = Fe − Ce + Cξ    (17)

where e ≡ x_B − x_T is the synchronization error.

The stochastic differential equation (17) implies a deterministic partial differential equation, the Fokker-Planck equation, for the probability distribution ρ(e):

∂ρ/∂t + ∇_e · [ρ(F − C)e] = (1/2) δ ∇_e · (C R C^T ∇_e ρ)    (18)

where R = ⟨ξξ^T⟩ is the observation error covariance matrix, and δ is a time-scale characteristic of the noise, analogous to the discrete time between molecular kicks in a Brownian motion process that is represented as a continuous process in Einstein’s well known treatment. Equation (18) states that the local change in ρ is given by the divergence of a probability current ρ(F − C)e, except for random “kicks” due to the stochastic term.

The PDF can be taken to have the Gaussian form ρ = N exp(−e^T K e), where the matrix K is the inverse spread, and N is a normalization factor, chosen so that ∫ ρ d^n e = 1. For background error covariance B, K = (2B)^-1. In the one-dimensional case, n = 1, where C and K are scalars, substitution of the Gaussian form in Eq. (18), for the stationary case where ∂ρ/∂t = 0, yields:

2B(C − F) = δRC²    (19)

Solving dB/dC = 0, it is readily seen that B is minimized (K is maximized) when C = 2F = (1/δ)B/R.
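The one-dimensional result can be checked numerically. The following sketch (assumed parameter values, not from the paper) simulates the scalar error equation ė = (F − C)e + Cξ by Euler-Maruyama and compares the stationary spread with Eq. (19) solved for B; it then scans B(C) to confirm the minimum at C = 2F.

```python
import numpy as np

F, C, delta, R = 1.0, 4.0, 0.1, 1.0   # assumed scalar parameters; C > F (damped)
dt, nsteps = 1e-3, 1_000_000
rng = np.random.default_rng(1)

# Euler-Maruyama for de = (F - C) e dt + C sqrt(delta R) dW
a = 1.0 + (F - C) * dt
amp = C * np.sqrt(delta * R * dt)
e, acc = 0.0, 0.0
for w in rng.standard_normal(nsteps):
    e = a * e + amp * w
    acc += e * e
B_emp = acc / nsteps

B_theory = delta * R * C**2 / (2.0 * (C - F))   # Eq. (19) solved for B
print(abs(B_emp - B_theory) / B_theory < 0.1)   # stationary spread matches

# B(C) = delta R C^2 / (2(C - F)) is minimized at C = 2F
Cs = np.linspace(1.5, 6.0, 451)
Bs = delta * R * Cs**2 / (2.0 * (Cs - F))
print(abs(Cs[np.argmin(Bs)] - 2.0 * F) < 0.02)  # minimum at C = 2F
```

With these values B_theory = 0.1·16/(2·3) ≈ 0.267, and the empirical variance of the simulated error agrees to within sampling noise.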

In the multidimensional case, n > 1, the relation (19) generalizes to the fluctuation-dissipation relation

B(C − F)^T + (C − F)B = δCRC^T    (20)


Fig. 3. An analysis cycle, with trajectories shown for the true state “T”, the model evolving from the initial analysis “A” to the next forecast, or background “B”, and an alternative model run (dotted line) starting from an inferior “analysis” that is further from the initial truth. In the case (a) where Df(x_T) = Df(x_A), a worse analysis will always produce a worse forecast, but in the general case (b) where Df(x_T) ≠ Df(x_A), nonlinearities may allow a worse analysis to evolve to a better forecast (the trajectories do not actually cross).

that can be obtained directly from the stochastic differential equation (17) by a standard proof that is reproduced in Appendix A. B can then be minimized element-wise. Differentiating the matrix equation (20) with respect to the elements of C, we find

dB (C − F)^T + B (dC)^T + (dC) B + (C − F) dB = δ[(dC) R C^T + C R (dC)^T]    (21)

where the matrix dC represents a set of arbitrary increments in the elements of C, and the matrix dB represents the resulting increments in the elements of B. Setting dB = 0, we have

[B − δCR](dC)^T + (dC)[B − δRC^T] = 0    (22)

Since the matrices B and R are each symmetric, the two terms in Eq. (22) are transposes of one another. It is easily shown that the vanishing of their sum, for arbitrary dC, implies the vanishing of the factors in brackets in Eq. (22). Therefore C = (1/δ)BR^-1, as in the 1D case.
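The fluctuation-dissipation relation (20) itself can be verified numerically (a sketch with assumed toy matrices): rewritten as the Lyapunov equation (F − C)B + B(F − C)^T = −δCRC^T, it is solved by vectorization, and the solution is compared with the empirical stationary covariance of a simulated realization of Eq. (17).

```python
import numpy as np

F = np.array([[0.5, 1.0], [-1.0, 0.2]])   # assumed Jacobian
C = np.array([[3.0, 0.0], [0.0, 2.5]])    # assumed coupling; F - C is stable
R = np.array([[1.0, 0.2], [0.2, 0.8]])    # observation noise covariance (assumed)
delta, dt, nsteps = 0.1, 1e-3, 500_000
A = F - C

# solve Eq. (20) as the Lyapunov equation A B + B A^T = -delta C R C^T
I = np.eye(2)
Q = delta * C @ R @ C.T
B = np.linalg.solve(np.kron(A, I) + np.kron(I, A), -Q.reshape(-1)).reshape(2, 2)

# Euler-Maruyama simulation of Eq. (17): e' = (F - C) e + C xi
rng = np.random.default_rng(2)
Mstep = I + A * dt
N = np.sqrt(delta * dt) * (C @ np.linalg.cholesky(R))
e = np.zeros(2)
acc = np.zeros((2, 2))
for w in rng.standard_normal((nsteps, 2)):
    e = Mstep @ e + N @ w
    acc += np.outer(e, e)
B_emp = acc / nsteps

rel_err = np.max(np.abs(B_emp - B)) / np.max(np.abs(B))
print(rel_err < 0.15)  # empirical covariance matches the solution of Eq. (20)
```

The agreement confirms that the stationary covariance of the linearized error dynamics is exactly the B appearing in the fluctuation-dissipation relation.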

3.2 Optimal synchronization vs. least-squares data assimilation

Turning now to the standard methods, so that a comparison can be made, it is recalled that the analysis x_A after each cycle is given by:

x_A = R(R + B)^{−1} x_B + B(R + B)^{−1} x_obs = x_B + B(R + B)^{−1}(x_obs − x_B) (23)

In 3DVar, the background error covariance matrix B is fixed; in Kalman filtering it is updated after each cycle using the linearized dynamics. The background for the next cycle is computed from the previous analysis by integrating the dynamical equations:

x_B^{n+1} = x_A^n + τ f(x_A^n) (24)

where τ is the time interval between successive analyses. Thus the forecasts satisfy a difference equation:

x_B^{n+1} = x_B^n + B(R + B)^{−1}(x_obs^n − x_B^n) + τ f(x_A^n) (25)

We model the discrete process as a continuous process in which analysis and forecast are the same:

ẋ_B = f(x_B) + (1/τ) B(B + R)^{−1}(x_T − x_B + ξ) + O[(B(B + R)^{−1})^2] (26)

using the white noise ξ to represent the difference between observation x_obs and truth x_T. The continuous approximation is valid so long as f varies slowly on the time-scale τ.

It is seen that when background error is small compared to observation error, the higher-order terms O[(B(B + R)^{−1})^2] can be neglected, and the optimal coupling C = (1/δ)BR^{−1} is just the form that appears in the continuous data assimilation equation (26), for δ = τ. Thus, under the linear assumption that Df(x_B) = Df(x_T), the synchronization approach is equivalent to 3DVar in the case of constant background error, and to Kalman filtering if background error is dynamically updated over time. The equivalence can also be shown for an exact description of the discrete analysis cycle, by comparing it to a coupled pair of synchronized maps; see Appendix B.
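The continuous assimilation equation (26) is easy to demonstrate as a nudging scheme. The sketch below couples two copies of the Lorenz '63 system, with a scalar gain c standing in for the matrix (1/τ)B(B + R)^{−1}; the gain, noise level, and step size are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), rho * x - y - x * z, x * y - beta * z])

rng = np.random.default_rng(1)
dt, nsteps, c = 0.002, 50000, 10.0       # c: scalar stand-in for (1/tau)B(B+R)^-1
noise = 0.05 * rng.standard_normal((nsteps, 3))   # observational noise xi

xT = np.array([1.0, 1.0, 20.0])          # "truth"
xB = np.array([-5.0, 5.0, 30.0])         # model, different initial state
err = []
for k in range(nsteps):
    xT = xT + dt * lorenz(xT)
    xB = xB + dt * (lorenz(xB) + c * (xT - xB + noise[k]))   # nudging, Eq. (26)
    err.append(np.abs(xT - xB).max())

print(err[0], np.mean(err[-5000:]))      # error shrinks by orders of magnitude
```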

The equivalence between synchronization and standard methods in the linear case actually follows easily from a comparison of the optimization principles that define the two approaches. In the standard approaches, the form (23) minimizes the expected value of (x_A − x_T)^2, as compared to all other linear combinations of x_obs and x_B. But if the double linearization (16) holds, then the minimization of <(x_A − x_T)^2> implies the minimization of <(x_B − x_T)^2> at any future time.

To see this, first consider the background error after a period of time τ, just before the next analysis, as in Fig. 3a. If the double linearization (16) holds, then this projected background error is related to the initial analysis error by

e(t) = T [exp ∫_{t_o}^{t} F(t′) dt′] e(t_o) ≡ M e(t_o) (27)

where the notation T before the expression in brackets denotes time-ordering. The expectations are related by

B = <e(t) e^T(t)> = M <e(t_o) e^T(t_o)> M^T = M A M^T (28)

If we consider a more general "analysis" x_A^Λ formed from a general linear combination of forecast and observations

x_A^Λ ≡ x_B + Λ(x_obs − x_B) (29)



and compute a general "analysis error covariance" A_Λ ≡ <(x_A^Λ − x_T)(x_A^Λ − x_T)^T> accordingly, we seek to minimize the trace Tr A_Λ = <(x_A^Λ − x_T)^2>. But it is readily shown that a solution Λ of d_Λ Tr A_Λ = 0 is also a solution of d_Λ Tr M(t) A_Λ M^T(t) = 0 for 0 < t.


by X1, Y1, Z1, based on "observations" of the X, Y, Z system. It is seen that the Kalman filter approach (Fig. 4b) gives bursts of desynchronization just after the transitions, as expected, unlike the coupling (31) (Fig. 4a), although the Kalman filter performance is better on the average. (The plots in Fig. 4 are averages over a large number of realizations of the stochastic process. Each realization is based on exactly the same master system trajectory. The desynchronization phenomenon occurs when the Lorenz/reversed-Lorenz transition takes place at certain points on the master system attractor, but not when it takes place at other points.)

There are several lessons to be learned from this extreme and somewhat artificial example. First, the optimal synchronization scheme, with time-varying coupling, would reduce both the average error and the bursting phenomenon. That would be the ideal way to do data assimilation. But second, the Kalman filtering approach is almost always better than any of the coupling algorithms described in the synchronization literature. Specifically, the bursts (corresponding to "catastrophes" in the data assimilation literature) are short-lived even in this extreme example, lasting for only one or two time steps.

In the geophysical realm, effective non-autonomy is common, as with phenomena influenced by the diurnal or annual cycles, for example. More generally, the highly nonlinear regions of phase space where the assumption (16) fails, and where the optimality of the Kalman filter is expected to break down, correspond to regime transitions. It is in such regions, which typically occupy small volumes of phase space, that the synchronization approach is expected to improve on standard methods to a small degree.

4 Synchronization vs. data assimilation for strongly nonlinear dynamics

In a region of state space where nonlinearities are strong and Eq. (16) fails, the prognostic equation for error (17) must be extended to incorporate nonlinearities. Additionally, model error due to processes on small scales that escape the digital representation should be considered. While errors in the parameters or the equations for the explicit degrees of freedom require deterministic corrections, the unresolved scales, assumed dynamically independent, can only be represented stochastically. The physical system is governed by:

ẋ_T = f(x_T) − ξ_M (32)

in place of Eq. (15a), where ξ_M is model error, with covariance Q ≡ <ξ_M ξ_M^T>. The error equation (17) becomes

ė = (F − C)e + Ge^2 + He^3 + Cξ + ξ_M (33)

where we have included terms up to cubic order in e, with H



Fig. 5. For a true system ẋ = f(x), with f defined by (38) describing a two-well potential about x = x_o = 0 (a), background error B is plotted against coupling strength C for bimodality parameters d1=1.06, d2=1.26, observation error R=1, and inter-observation time τ=0.1 (b). Although the distribution of true states is bimodal, the error distribution ρ(e) is unimodal (c). B vs. C for a shorter inter-observation time τ=0.01 (d) gives a covariance inflation factor near unity (see text).

Matching the gross structure of the two-well potential to the gross structure of the Lorenz '63 or Lorenz '84 systems, one can find values of G and H that will give the correct distances between the central fixed point and the two outlying fixed points, respectively. For the Lorenz '84 system, these distances are d1=1.06 and d2=1.26, respectively. The prognostic equation for error (33) has fixed points in the desired positions iff G=0.15 and H=−0.75, in a re-scaled time coordinate for which F=1.

For the perfect model case, Q=0, the function B(C) is then found numerically to have the form plotted in Fig. 5b, with a distinct optimum at B=0.15, C=1.5, where the inter-observation time τ=0.1 (6 h in dimensional units) is chosen to be an order of magnitude smaller than the dynamical time scale F^{−1}, and we assume a large observation error R=1, of the same order as the typical displacement of x about the assumed unstable fixed point. Note that although the true state of the system is distributed bimodally, the distribution (35) of synchronization error ρ(e) shown in Fig. 5c is unimodal, because of smearing by observational noise.
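The computation of B(C) can be sketched by direct Monte Carlo simulation of the error equation (33). The grid, step size, and run length below are arbitrary choices, and the resulting curve is only a rough counterpart of Fig. 5b: the stationary variance is estimated here from a single long trajectory per value of C, not from a stationary Fokker-Planck solution.

```python
import numpy as np

# Cubic error model (33) with F=1, G=0.15, H=-0.75, delta=tau=0.1, R=1,
# perfect model (Q=0).  Euler-Maruyama, one trajectory per coupling C.
F, G, H = 1.0, 0.15, -0.75
delta, R = 0.1, 1.0
dt, nsteps, burn = 0.01, 200000, 20000

rng = np.random.default_rng(5)
Cs = np.arange(1.25, 3.01, 0.25)        # couplings to scan
e = np.zeros_like(Cs)                   # one error trajectory per C
acc = np.zeros_like(Cs)                 # accumulates e^2 after a transient
noise = rng.standard_normal((nsteps, Cs.size))
for k in range(nsteps):
    drift = (F - Cs) * e + G * e**2 + H * e**3
    e = e + dt * drift + Cs * np.sqrt(delta * R * dt) * noise[k]
    if k >= burn:
        acc += e**2

B = acc / (nsteps - burn)               # stationary variance estimate B(C)
print(dict(zip(np.round(Cs, 2), np.round(B, 3))))
```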

The coupling that gives optimal synchronization can again be compared with the coupling used in standard data assimilation, as for the linear case. In particular, one can ask whether the "covariance inflation" scheme that is used as an ad hoc adjustment in Kalman filtering (Anderson, 2001) can reproduce the C values found to be optimal for synchronization. The form C = τ^{−1} B(B + R)^{−1} is replaced by the adjusted form

C = (1/τ) F B (F B + R)^{−1} (39)

Table 1. Covariance inflation factor vs. bimodality parameters d1, d2, for 50% model error in the resolved tendency.

d2 \ d1   0.75   1.00   1.25   1.50   1.75   2.00
0.75      1.26   1.26   1.28   1.30   1.32   1.34
1.00      1.26   1.23   1.23   1.25   1.27   1.29
1.25      1.28   1.23   1.22   1.23   1.24   1.25
1.50      1.30   1.25   1.23   1.22   1.23   1.24
1.75      1.32   1.27   1.24   1.23   1.23   1.23
2.00      1.34   1.29   1.25   1.24   1.23   1.23

where F is the covariance inflation factor. For the example depicted in Fig. 5b, the optimal value C=1.5 would be generated by an inflation factor F=1.2.
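In the scalar case, Eq. (39) can be inverted for the inflation factor: Cτ = FB/(FB + R) gives F = CτR/(B(1 − Cτ)). A two-line check with the Fig. 5b values (a sketch; the paper's own result comes from the numerical optimization described in the text):

```python
# Invert Eq. (39) in the scalar case: C = (1/tau) * F*B / (F*B + R)
#   =>  F = C*tau*R / (B * (1 - C*tau))
C, tau, B, R = 1.5, 0.1, 0.15, 1.0   # optimal values quoted for Fig. 5b

F = C * tau * R / (B * (1.0 - C * tau))
print(round(F, 2))                   # an inflation factor near 1.2
```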

The optimization problem was solved numerically, with results as plotted in Table 1 for a range of values of the bimodality parameters d1 and d2, giving dynamical parameters G = 1/d1 − 1/d2 and H = −1/(d1 d2). Results are displayed for the case where the amplitude of model error in Eq. (32) is about 50% of the resolved tendency ẋ_T, with the resulting model error covariance Q=0.04 approximately one-fourth of the background error covariance B. The covariance inflation factors are remarkably constant over a wide range of parameters and agree with typical values used in operational practice.

For the range of bimodality parameters considered, the large stochastic model error makes little difference in the estimated covariance inflation factors, typically changing F only by about ±0.001. Indeed, for the linear case (d1 = d2 = ∞), the optimal coupling is still (1/δ)BR^{−1}, as for a perfect model. The stochastic model error results in an extra constant term δQ on the RHS of the fluctuation-dissipation relation (20) that does not affect the derivatives in the subsequent optimization procedure. (However, Q does enter the prognostic equation (13) for B in the linear case, just as in Kalman filtering.)

As the inter-observation time τ becomes smaller, B decreases at the minimum point, and the form (39) implies a decreasing covariance inflation factor. For the case τ=0.01 shown in Fig. 5d, the required factor is near unity (precisely, F=1.02). That is, the coupling required for continuous-time Kalman filtering approaches the optimal coupling for synchronization. That the advantage conferred by the synchronization approach obtains only for sizable inter-observation times suggests some commonality with nonlinear generalizations of Kalman filtering (e.g. Miller and Ghil, 1994). The difference between optimal synchronization and nonlinear Kalman filtering, while likely to be small, merits further investigation, since the two approaches are defined by different goals. Similarly, the use of ensembles to represent non-Gaussian PDFs in an empirical way (Anderson, 2003)



would seem to address some of the same issues treated in this section, and a comparison is warranted.

In the synchronization view, the values typically used for the covariance inflation factor in the presence of model nonlinearity and stochastic model error are readily explained. Sampling error, as it affects estimates of background covariance B, is not treated here, but might be expected to enter in a similar way and not to substantially alter the conclusions of the optimal coupling analysis.

5 Concluding remarks: data assimilation as machine perception

That ad hoc covariance inflation factors used in operational practice can be explained naturally in the synchronization view of data assimilation suggests a deep relevance for that viewpoint. Nothing in the foregoing discussion of synchronization and data assimilation is limited to meteorological processes. In any situation in which a computational predictive model of a physical process receives a stream of new data as it is running, the synchronization of the physical process and the model is the true goal of any assimilation scheme. As suggested in the introduction, the philosophical basis for the proposed use of synchronization appears to be the idea of synchronicity as espoused by Jung and Pauli, originally in a psychological context.

While a weather-prediction model is not usually viewed as artificially intelligent software, it forms an internal representation of the external world as complex as that of any robot, albeit without the motor component. One could envision augmenting the statistical data assimilation algorithms with rules to discriminate between good and bad observations, and with rules to transform the observed data in complex ways based on the current model state. Neurobiological paradigms may guide the design of such artificially intelligent predictive systems. It is known that interneuronal synchronization across wide distances in the brain plays a role in the grouping of percepts that is a prerequisite to higher processing, and may even underlie consciousness (Strogatz, 2003; Von Der Malsburg and Schneider, 1986). These findings lend credibility to the suggestion that a synchronization principle is also fundamental to the relationship between the brain and the external world, and that synchronization should be a cornerstone in the design of a neuromorphic, artificially intelligent predictive model.

Machine learning might also be realized in the synchronization context, so as to correct for deterministic model error in the resolved degrees of freedom. By allowing model parameters to vary slowly, generalized synchronization would be transformed to more nearly identical synchronization. Indeed, parameter adaptation laws can be added to a synchronously coupled pair of systems so as to synchronize the parameters as well as the states. Parlitz (1996) showed, for example, that two unidirectionally coupled Lorenz systems

    with different parameters:

Ẋ = σ(Y − X)
Ẏ = ρX − Y − XZ (40)
Ż = −βZ + XY

Ẋ1 = σ(Y − X1)
Ẏ1 = ρ1 X1 − ν Y1 − X1 Z1 + µ
Ż1 = −βZ1 + X1 Y1

    could be augmented with parameter adaptation rules:

ρ̇1 = (Y − Y1) X1
ν̇ = (Y1 − Y) Y1 (41)
µ̇ = Y − Y1

so that the Lorenz systems would synchronize, and additionally ρ1 → ρ, ν → 1, and µ → 0. Note that parameters cease to adapt when the systems are perfectly synchronized, with Y − Y1 = 0. Generalizations to PDEs would allow model parameters to adapt automatically. In complex cases, a stochastic component (in the parameters) might be necessary to allow parameters to jump among multiple basins of attraction, most of which are sub-optimal. The stochastic approach could perhaps be extended to a genetic algorithm that would make random qualitative changes in the model as well, until synchronization is achieved.
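The scheme (40)-(41) can be reproduced in a few lines of explicit Euler integration. The step size, run length, initial states, and initial parameter guesses below are arbitrary choices; per Parlitz's result, the adapted parameters should drift toward ρ1 = ρ, ν = 1, µ = 0.

```python
# Master Lorenz system (40) with the slave driven by the master's Y,
# plus the parameter adaptation rules (41).
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0   # master parameters
rho1, nu, mu = 30.0, 1.3, 1.0              # mismatched slave parameters (guesses)

X, Y, Z = 1.0, 1.0, 20.0                   # master state
X1, Y1, Z1 = 2.0, 3.0, 15.0                # slave state

dt = 0.0005
for _ in range(400000):                    # t = 200 time units
    dX = sigma * (Y - X)                   # master, Eq. (40)
    dY = rho * X - Y - X * Z
    dZ = -beta * Z + X * Y
    dX1 = sigma * (Y - X1)                 # slave, driven by master's Y
    dY1 = rho1 * X1 - nu * Y1 - X1 * Z1 + mu
    dZ1 = -beta * Z1 + X1 * Y1
    drho1 = (Y - Y1) * X1                  # adaptation rules, Eq. (41)
    dnu = (Y1 - Y) * Y1
    dmu = Y - Y1
    X += dt * dX; Y += dt * dY; Z += dt * dZ
    X1 += dt * dX1; Y1 += dt * dY1; Z1 += dt * dZ1
    rho1 += dt * drho1; nu += dt * dnu; mu += dt * dmu

print(rho1, nu, mu)   # rho1 -> rho, nu -> 1, mu -> 0 (Parlitz, 1996)
```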

The main competing approach to the tracking of reality by a predictive model is Kalman filtering, or generalizations thereof that use Bayesian reasoning to estimate the current state. Further development of the optimal synchronization approaches to provide more refined modifications of the Kalman filter in select regions of state space will be of interest in any situation where a Kalman filter is used to track a highly nonlinear process.

Conversely, unidirectional synchronization can always be viewed as a data assimilation problem, by regarding the slave system as a "model" of the master. The synchronization properties of a bidirectionally coupled system can often be inferred from the study of a corresponding unidirectional configuration. The optimally modified Kalman filter that is needed for data assimilation will therefore also be useful for optimizing the synchronization of dynamical systems generally.

    Appendix A

Derivation of the fluctuation-dissipation relation for synchronously coupled differential equations with noise in the coupling channel

Consider the stochastic differential equation for synchronization error (17), rewritten as:

de/dt = (F − C)e + Cξ (A1)



where e is the synchronization error vector, F is a matrix representing the linearized dynamics, C is the coupling matrix, and ξ is a time-dependent vector of white noise processes satisfying <ξ> = 0 and

<ξ(t) ξ^T(t′)> = δR δ(t − t′) (A2)

where R is the observation error covariance matrix and δ is the time over which the physical noise decorrelates. The solution to Eq. (A1) is:

e(t) = e^{(F−C)t} e(0) + ∫_0^t dt′ e^{(F−C)(t−t′)} C ξ(t′) (A3)

    Thus the mean synchronization error is

<e(t)> = e^{(F−C)t} e(0) (A4)

    and the synchronization error variance is

<[e(t) − e^{(F−C)t} e(0)][e(t) − e^{(F−C)t} e(0)]^T> = ∫_0^t dt′ ∫_0^t dt′′ e^{(F−C)(t−t′)} C <ξ(t′) ξ^T(t′′)> C^T e^{(F−C)^T(t−t′′)} (A5)

    or

<e(t) e^T(t)> = e^{(F−C)t} e(0) e^T(0) e^{(F−C)^T t} + ∫_0^t dt′ e^{(F−C)(t−t′)} δCRC^T e^{(F−C)^T(t−t′)} (A6)

If C is chosen so that the system synchronizes in the absence of noise as t→∞, then the first term on the right-hand side of Eq. (A6) vanishes in this limit. The system with noise approaches a stationary state with <e> = 0 and

<e e^T> ≡ B = ∫_0^∞ dt e^{(F−C)t} δCRC^T e^{(F−C)^T t} (A7)

Differentiating the integrand in Eq. (A7) with respect to t and using Eq. (A7) to simplify the resulting expression, we find

(F − C)B + B(F − C)^T = ∫_0^∞ dt (d/dt)[e^{(F−C)t} δCRC^T e^{(F−C)^T t}] = [e^{(F−C)t} δCRC^T e^{(F−C)^T t}]|_0^∞ (A8)

The last expression in brackets vanishes at the upper limit for the case of stable synchronization, so we have

(F − C)B + B(F − C)^T = −δCRC^T (A9)

    which is the fluctuation-dissipation relation (20).
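In the scalar case, Eq. (A9) gives B = δC^2 R/(2(C − F)); an Euler-Maruyama simulation of Eq. (A1) can be checked against it. The parameter values are illustrative, with C > F so that the noise-free system synchronizes.

```python
import numpy as np

# Scalar check of the FDR (A9): for n=1, B = delta*C^2*R / (2*(C - F)).
F, C, delta, R = 1.0, 3.0, 0.1, 1.0
B_exact = delta * C**2 * R / (2.0 * (C - F))

rng = np.random.default_rng(2)
dt, nsteps = 0.01, 400000
noise = rng.standard_normal(nsteps)
e = 0.0
samples = np.empty(nsteps)
for k in range(nsteps):
    # Euler-Maruyama step of Eq. (A1): de = (F - C) e dt + C sqrt(delta*R*dt) dW
    e += dt * (F - C) * e + C * np.sqrt(delta * R * dt) * noise[k]
    samples[k] = e

B_sim = samples[5000:].var()        # discard transient, estimate variance
print(B_exact, B_sim)
```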

    Appendix B

    Optimal coupling for synchronization of discrete-time maps

In Sects. 3.1 and 3.2, standard data assimilation was compared to optimal synchronization of differential equations by considering the continuous-time limit of the discrete analysis cycle. Instead, one can leave the analysis cycle intact and compare it to a discrete-time version of optimal synchronization, i.e. to optimally synchronized maps.

We begin by deriving a fluctuation-dissipation relation (FDR) for stochastic difference equations. Consider the stochastic difference equation with additive noise,

x(n+1) = F x(n) + ξ(n),  <ξ(n) ξ(m)^T> = R δ_{n,m} (B1)

where x, ξ ∈ R^n, F and R are n×n matrices, F is assumed to be stable, and ξ is Gaussian white noise. One can prove by induction that the solution to this equation, with initial condition x(p), is

x(n+1) = F^{n+1−p} x(p) + Σ_{m=p}^{n} F^{m−p} ξ(n + p − m) (B2)

We first wish to find the equilibrium covariance matrix Γ = <x x^T>. If the initial condition is in the infinite past, then the equilibrium covariance is the covariance at any finite iteration, and it is convenient to choose iteration one. Since F is stable, the initial condition is forgotten and we obtain:

x(1) = Σ_{m=0}^{∞} F^m ξ(−m) (B3)

    and then

Γ = Σ_{m=0}^{∞} F^m R F^{m T} (B4)

One can then show that Γ satisfies the matrix FDR

F Γ F^T − Γ + R = 0 (B5)
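The FDR (B5) can be verified by truncating the series (B4) for a randomly generated stable F (the matrices below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
F = rng.standard_normal((n, n))
F = 0.9 * F / max(abs(np.linalg.eigvals(F)))   # rescale: spectral radius < 1

A = rng.standard_normal((n, n))
R = A @ A.T + np.eye(n)                        # noise covariance, SPD

# Equilibrium covariance from the series (B4), truncated at 500 terms
Gamma = np.zeros((n, n))
Fm = np.eye(n)
for _ in range(500):
    Gamma += Fm @ R @ Fm.T
    Fm = Fm @ F

# Discrete fluctuation-dissipation relation (B5)
residual = F @ Gamma @ F.T - Gamma + R
print(np.abs(residual).max())                  # ~0 up to truncation error
```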

Now consider a model that takes the analysis at step n to a new background at step n+1, given by a linear matrix M. That is, x_B(n+1) = M x_A(n). Also, x_T(n+1) = M x_T(n). Since x_A(n) = x_B(n) + B(B+R)^{−1}(x_obs(n) − x_B(n)), where x_obs = x_T + ξ, we derive a difference equation for e ≡ x_B − x_T:

e(n+1) = M(I − B(B+R)^{−1}) e(n) + M B(B+R)^{−1} ξ (B6)

For synchronously coupled maps, on the other hand, we have

e(n+1) = (M − C) e(n) + Cξ (B7)

    and with the FDR as derived above:

(M − C)B(M − C)^T − B + CRC^T = 0 (B8)



Differentiating the matrix equation (B8) with respect to the elements of C, as in the continuous-time analysis, we find

0 = (M − C) dB (M − C)^T + (−dC) B (M − C)^T + (M − C) B (−dC)^T − dB + dC R C^T + C R dC^T (B9)

We seek a matrix C for which dB = 0 for arbitrary dC, and thus

(−dC)[B(M − C)^T − RC^T] + [(M − C)B − CR](−dC)^T = 0 (B10)

for arbitrary dC. The two terms are transposes of one another, and it is easily shown, as in the continuous-time case, that the quantities in brackets must vanish. This gives the optimal matrix

C = M B(B + R)^{−1} (B11)

which, upon substitution in Eq. (B7), reproduces the standard data assimilation form (B6), confirming the equivalence.
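The equivalence is also easy to confirm numerically: with C = MB(B + R)^{−1}, the synchronization map (B7) and the assimilation map (B6) act identically on any error vector and noise realization. The matrices below are random illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
M = rng.standard_normal((n, n))          # linearized model map (arbitrary)
A1, A2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
B = A1 @ A1.T + np.eye(n)                # background error covariance, SPD
R = A2 @ A2.T + np.eye(n)                # observation error covariance, SPD

C = M @ B @ np.linalg.inv(B + R)         # optimal coupling, Eq. (B11)

e = rng.standard_normal(n)               # current error
xi = rng.standard_normal(n)              # observation noise

e_sync = (M - C) @ e + C @ xi                      # map (B7)
K = B @ np.linalg.inv(B + R)
e_da = M @ (np.eye(n) - K) @ e + M @ K @ xi        # map (B6)

print(np.abs(e_sync - e_da).max())       # identical to machine precision
```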

Acknowledgements. The authors thank S.-C. Yang for providing software used as a basis for the alternating Lorenz system experiment. The National Center for Atmospheric Research is sponsored by the National Science Foundation. This work was supported under NSF Grant 0327929.

Edited by: M. Thiel
Reviewed by: two referees

    References

Afraimovich, V. S., Verichev, N. N., and Rabinovich, M. I.: Stochastic synchronization of oscillation in dissipative systems, Radiophys. Quantum Electron., 29, 795–803, 1986.

Anderson, J. L.: An ensemble adjustment Kalman filter for data assimilation, Mon. Wea. Rev., 129, 2884–2903, 2001.

Anderson, J. L.: A local least-squares framework for ensemble filtering, Mon. Wea. Rev., 131, 634–642, 2003.

Ashwin, P., Buescu, J., and Stewart, I.: Bubbling of attractors and synchronisation of chaotic oscillators, Phys. Lett. A, 193, 126–139, 1994.

Baek, S. J., Hunt, B. R., Szunyogh, I., Zimin, A., and Ott, E.: Localized error bursts in estimating the state of spatiotemporal chaos, Chaos, 14, 1042–1049, 2004.

Corazza, M., Kalnay, E., Patil, D. J., Yang, S.-C., Morss, R., Cai, M., Szunyogh, I., Hunt, B. R., and Yorke, J. A.: Use of the breeding technique to estimate the structure of the analysis "errors of the day", Nonlin. Processes Geophys., 10, 233–243, 2003, http://www.nonlin-processes-geophys.net/10/233/2003/.

Duane, G. S.: Synchronized chaos in extended systems and meteorological teleconnections, Phys. Rev. E, 56, 6475–6493, 1997.

Duane, G. S.: Synchronized chaos in climate dynamics, in: Proc. 7th Experimental Chaos Conference, edited by: In, V., Kocarev, L., Carroll, T. L., et al., AIP Conference Proceedings 676, Melville, New York, 115–126, 2003.

Duane, G. S. and Tribbia, J. J.: Synchronized chaos in geophysical fluid dynamics, Phys. Rev. Lett., 86, 4298–4301, 2001.

Duane, G. S. and Tribbia, J. J.: Weak Atlantic-Pacific teleconnections as synchronized chaos, J. Atmos. Sci., 61, 2149–2168, 2004.

Evensen, G.: Sequential data assimilation with a non-linear quasi-geostrophic model using Monte Carlo methods to forecast error statistics, J. Geophys. Res., 99, 10 143–10 162, 1994.

Fujisaka, H. and Yamada, T.: Stability theory of synchronized motion in coupled-oscillator systems, Prog. Theor. Phys., 69, 32–47, 1983.

Jung, C. G. and Pauli, W.: The interpretation of nature and the psyche, Pantheon, New York, 1955.

Kocarev, L., Tasev, Z., and Parlitz, U.: Synchronizing spatiotemporal chaos of partial differential equations, Phys. Rev. Lett., 79, 51–54, 1997.

Lorenz, E. N.: Deterministic nonperiodic flows, J. Atmos. Sci., 20, 130–141, 1963.

Lorenz, E. N.: Irregularity – a fundamental property of the atmosphere, Tellus A, 36, 98–110, 1984.

Miller, R. N. and Ghil, M.: Advanced data assimilation in strongly nonlinear dynamical systems, J. Atmos. Sci., 51, 1037–1056, 1994.

Parlitz, U.: Estimating model parameters from time series by autosynchronization, Phys. Rev. Lett., 76, 1232–1235, 1996.

Patil, D. J., Hunt, B. R., Kalnay, E., Yorke, J. A., and Ott, E.: Local low dimensionality of atmospheric dynamics, Phys. Rev. Lett., 86, 5878–5881, 2001.

Pecora, L. M. and Carroll, T. L.: Synchronization in chaotic systems, Phys. Rev. Lett., 64, 821–824, 1990.

Pecora, L. M., Carroll, T. L., Johnson, G. A., Mar, D. J., and Heagy, J. F.: Fundamentals of synchronization in chaotic systems, concepts, and applications, Chaos, 7, 520–543, 1997.

Rulkov, N. F., Sushchik, M. M., and Tsimring, L. S.: Generalized synchronization of chaos in directionally coupled chaotic systems, Phys. Rev. E, 51, 980–994, 1995.

So, P., Ott, E., and Dayawansa, W. P.: Observing chaos – deducing and tracking the state of a chaotic system from limited observation, Phys. Rev. E, 49, 2650–2660, 1994.

Strogatz, S. H.: Sync: The Emerging Science of Spontaneous Order, Theia, New York, 338 pp., 2003.

Toth, Z. and Kalnay, E.: Ensemble forecasting at NMC – the generation of perturbations, Bull. Amer. Meteor. Soc., 74, 2317–2330, 1993.

Toth, Z. and Kalnay, E.: Ensemble forecasting at NCEP and the breeding method, Mon. Wea. Rev., 125, 3297–3319, 1997.

Vautard, R., Legras, B., and Déqué, M.: On the source of midlatitude low-frequency variability. Part I: A statistical approach to persistence, J. Atmos. Sci., 45, 2811–2843, 1988.

Von Der Malsburg, C. and Schneider, W.: A neural cocktail-party processor, Biol. Cybernetics, 54, 29–40, 1986.

Yang, S.-C., Baker, D., Cordes, K., Huff, M., Nagpal, G., Okereke, E., Villafañe, J., and Duane, G. S.: Data assimilation as synchronization of truth and model: Experiments with the three-variable Lorenz system, J. Atmos. Sci., 63, 2340–2354, 2006.

