Spectral Differencing with a Twist

Johann Radon Institute for Computational and Applied Mathematics, Linz, Austria, 15 March 2004


Transcript of Spectral Differencing with a Twist


[email protected] www.math.sfu.ca/~mrt

Richard Baltensperger, Université Fribourg

Manfred Trummer, Pacific Institute for the Mathematical Sciences & Simon Fraser University

Johann Radon Institute for Computational and Applied Mathematics

Linz, Austria

15 March 2004

Pacific Institute for the Mathematical Sciences

www.pims.math.ca

Simon Fraser University

Banff International Research Station

PIMS – MSRI collaboration (with MITACS)

5-day workshops, 2-day workshops, Focused Research, Research in Teams

www.pims.math.ca/birs

Round-off Errors

• 1991 – a Patriot missile battery in Dhahran, Saudi Arabia, fails to intercept an Iraqi missile, resulting in 28 fatalities. The problem was traced back to the accumulation of round-off error in computer arithmetic.

Introduction

Spectral Differentiation:

• Approximate the function f(x) by a global interpolant

• Differentiate the global interpolant exactly, e.g., as sketched below
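A minimal sketch of the idea, assuming polynomial interpolation at nodes x_0, …, x_N (the slide's own formula is not reproduced in this transcript):

p_N(x) = \sum_{k=0}^{N} f_k \, \ell_k(x), \qquad f'(x_j) \approx p_N'(x_j) = \sum_{k=0}^{N} \ell_k'(x_j) \, f_k,

where \ell_k denotes the k-th Lagrange basis polynomial.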

Features of spectral methods

+ Exponential (spectral) accuracy; converges faster than any power of 1/N

+ Fairly coarse discretizations give good results

− Full instead of sparse matrices

− Tighter stability restrictions

− Less robust, difficulties with irregular domains

Types of Spectral Methods

• Spectral-Galerkin: work in transformed space, that is, with the coefficients in the expansion.

Example: u_t = u_xx
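A minimal Python/NumPy sketch of this Galerkin example, assuming a periodic domain [0, 2π) and a Fourier expansion; the names and parameter values below are illustrative, not the presenters' code:

```python
import numpy as np

# u_t = u_xx with periodic boundary conditions on [0, 2*pi).
# Work in transformed (coefficient) space: each Fourier coefficient
# satisfies u_hat_k'(t) = -k^2 * u_hat_k(t), so it decays exactly.
N = 64
x = 2 * np.pi * np.arange(N) / N
u0 = np.exp(np.sin(x))                          # smooth periodic initial data
k = np.fft.fftfreq(N, d=1.0 / N)                # integer wavenumbers
t = 0.1
u_hat = np.fft.fft(u0) * np.exp(-(k ** 2) * t)  # evolve the coefficients
u = np.real(np.fft.ifft(u_hat))                 # back to physical space
```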

Types of Spectral Methods

• Spectral Collocation (Pseudospectral): work in physical space. Choose collocation points x_k, k = 0, …, N, and approximate the function of interest by its values at those collocation points.

Computations rely on interpolation.

• Issues: aliasing, ease of computation, nonlinearities

Interpolation

• Periodic function, equidistant points: FOURIER

• Polynomial interpolation:

• Interpolation by rational functions

•Chebyshev points

•Legendre points

•Hermite, Laguerre

Differentiation Matrix D

Discrete data set f_k, k = 0, …, N

Interpolate between collocation points x_k: p(x_k) = f_k

Differentiate p(x)

Evaluate p′(x_k) = g_k

All operations are linear: g = Df
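In formulas (a standard identity rather than a verbatim slide): the entries of D are the derivatives of the Lagrange basis polynomials at the nodes,

D_{jk} = \ell_k'(x_j), \qquad g_j = p'(x_j) = \sum_{k=0}^{N} D_{jk} f_k.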

Software

• Funaro: FORTRAN code, various polynomial spectral methods

• Don-Solomonoff, Don-Costa: PSEUDOPAK FORTRAN code, more engineering oriented, includes filters, etc.

• Weideman-Reddy: based on differentiation matrices, written in MATLAB (fast MATLAB programming)

Polynomial Interpolation

Lagrange form: “Although admired for its mathematical beauty and elegance, it is not useful in practice”

• “Expensive”

• “Difficult to update”

Barycentric formula, version 1

Barycentric formula, version 2
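For reference, the two versions are, in the standard notation of barycentric interpolation (a reconstruction, since the slide formulas are not in the transcript):

Version 1:  p(x) = L(x) \sum_{k=0}^{N} \frac{w_k}{x - x_k} f_k,  with  L(x) = \prod_{j=0}^{N} (x - x_j).

Version 2:  p(x) = \left( \sum_{k=0}^{N} \frac{w_k}{x - x_k} f_k \right) \Big/ \left( \sum_{k=0}^{N} \frac{w_k}{x - x_k} \right),

obtained by dividing version 1 by the same formula applied to the constant function 1.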

Set-up: O(N²)

Evaluation: O(N)

Update (add point): O(N)

New f_k values: no extra work!
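A short NumPy sketch of version 2, to make the O(N) evaluation cost concrete (illustrative code, not from the talk):

```python
import numpy as np

def bary_eval(x_eval, x_nodes, f, w):
    """Evaluate the barycentric interpolant (version 2) at one point."""
    diff = x_eval - x_nodes
    hit = np.flatnonzero(diff == 0.0)
    if hit.size:                      # x_eval coincides with a node
        return f[hit[0]]
    tmp = w / diff                    # O(N) work per evaluation point
    return np.dot(tmp, f) / np.sum(tmp)
```

Changing the data values f costs nothing extra, and the weights w are computed once (O(N²) in general, or from closed-form expressions for special node sets).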

Barycentric formula: weights w_k

Equidistant points:

Chebyshev points (1st kind):

Chebyshev points (2nd kind):
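With the Lagrange weights defined by w_k = 1 / \prod_{j \neq k} (x_k - x_j), the standard choices are (quoted from the barycentric interpolation literature, not from the slides, and stated up to a common constant factor):

Equidistant points:  w_k = (-1)^k \binom{N}{k}

Chebyshev points (1st kind), x_k = cos((2k+1)π/(2N+2)):  w_k = (-1)^k \sin((2k+1)π/(2N+2))

Chebyshev points (2nd kind), x_k = cos(kπ/N):  w_k = (-1)^k δ_k,  with δ_0 = δ_N = 1/2 and δ_k = 1 otherwise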

Barycentric formula: weights w_k

Weights can be multiplied by the same constant

This function interpolates for any choice of weights: rational interpolation!

Relative size of weights indicates ill-conditioning

Computation of the Differentiation Matrix

Entirely based upon interpolation.

Barycentric Formula

Barycentric (Schneider/Werner):

Chebyshev Differentiation

Differentiation Matrix:

x_k = cos(kπ/N)
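A compact NumPy sketch of the classical construction, in the spirit of Trefethen's cheb.m (variable names are mine, not from the talk):

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix on x_k = cos(k*pi/N), k = 0..N."""
    if N == 0:
        return np.zeros((1, 1)), np.ones(1)
    k = np.arange(N + 1)
    x = np.cos(np.pi * k / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** k
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                      # diagonal via the negative sum trick
    return D, x
```

Note that this version already fills the diagonal with the negative sum trick discussed later; the “original formulas” use explicit expressions for the diagonal instead.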

Chebyshev Matrix has Behavioural (numerical) Problems

• Trefethen-Trummer, 1987
• Rothman, 1991
• Breuer-Everson, 1992
• Don-Solomonoff, 1995/1997
• Bayliss-Class-Matkowsky, 1995
• Tang-Trummer, 1996
• Baltensperger-Berrut, 1996

Chebyshev Matrix and Errors

(Figures: the matrix, absolute errors, relative errors)

Round-off error analysis

has relative error

and so has

, therefore

Round-off Error Analysis

• With “good” computation we expect an error in D_01 of O(N²)

• Most elements in D are O(1)

• Some are of O(N), and a few are O(N²)

• We must be careful to see whether absolute or relative errors enter the computation

Remedies

• Preconditioning: add ax + b to f to create a function which is zero at the boundary

• Compute D in higher precision

• Use trigonometric identities

• Use symmetry: Flipping Trick

• NST: Negative Sum Trick

More ways to compute Df

• FFT-based approach

• Schneider-Werner formula

If we only want Df but not the matrix D (e.g., time stepping, iterative methods), we can compute Df for any f via the formula sketched below.
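The formula in question is the Schneider-Werner identity (quoted from the barycentric literature; the slide itself is not reproduced here): for barycentric weights w_k,

(Df)_k = p'(x_k) = \sum_{j \neq k} \frac{w_j}{w_k} \cdot \frac{f_j - f_k}{x_k - x_j} = \sum_{j \neq k} D_{kj} (f_j - f_k),

so Df can be formed directly from the data without ever computing the diagonal of D.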

Chebyshev Differentiation Matrix

“Original Formulas”:

Cancellation!

x_k = cos(kπ/N)

Trigonometric Identities

Flipping Trick

Use the symmetry of the Chebyshev points and “flip” the upper half of D into the lower half, or the upper left triangle into the lower right triangle.

sin(π – x) is not as accurate as sin(x)
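Concretely, with x_k = cos(kπ/N) (stated here as a reconstruction from standard trigonometry, not copied from the slide):

x_j - x_k = \cos(jπ/N) - \cos(kπ/N) = -2 \sin\big((j+k)π/(2N)\big) \, \sin\big((j-k)π/(2N)\big),

and the flipping trick exploits the symmetry x_{N-k} = -x_k, which gives D_{N-j, N-k} = -D_{jk}: entries in the lower half of D are obtained by flipping accurately computed entries from the upper half, avoiding evaluations of sin with arguments close to π.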

NST: Negative Sum Trick

Spectral differentiation is exact for constant functions:

Arrange the order of summation to sum the smaller elements first (requires sorting).
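Exactness for constants means every row of D sums to zero, so the diagonal can be defined from the off-diagonal entries (the negative sum trick):

\sum_{j=0}^{N} D_{kj} = 0 \quad \Longrightarrow \quad D_{kk} = -\sum_{j=0,\, j\neq k}^{N} D_{kj}.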

Numerical Example

Observations

• The original formula for D is very inaccurate

• Trig/Flip + “NST” (Weideman-Reddy) provides a good improvement

• FFT not as good as “expected”, in particular when N is not a power of 2

• NST applied to the original D gives the best matrix

• There are even more accurate ways to compute Df

Machine Dependency

•Results can vary substantially from machine to machine, and may depend on software.

•Intel/PC: FFT performs better

•SGI

•SUN

•DEC Alpha

Understanding the Negative Sum Trick

Error in D

Error in f

Understanding NST (2)

D_{kk} = -\sum_{j=0,\, j\neq k}^{N} D_{kj}

Understanding NST (3)

D_{kk} = -\sum_{j=0,\, j\neq k}^{N} D_{kj}

The inaccurate matrix elements are multiplied by very small numbers, leading to O(N²) errors: optimal accuracy.

Understanding the Negative Sum Trick

D_{kk} = -\sum_{j=0,\, j\neq k}^{N} D_{kj}

NST is an (inaccurate) implementation of the Schneider-Werner formula:

Schneider-Werner Negative Sum Trick

Understanding the Negative Sum Trick

D_{kk} = -\sum_{j=0,\, j\neq k}^{N} D_{kj}

•Why do we obtain superior results when applying the NST to the original (inaccurate) formula?

•Accuracy of Finite Difference Quotients:

Finite Difference Quotients

• For monomials, a cancellation of the cancellation errors takes place, e.g.:

• Typically f_j – f_k is less accurate than x_j – x_k, so computing x_j – x_k more accurately does not help!

Finite Difference Quotients

Fast Schneider-Werner

• Cost of Df is 2N², SW costs 3N²

• Can implement Df with a “Fast SW” method

(Diagram: Df is computed with the matrix D, except in two corner blocks, where the Schneider-Werner formula is used.)

Size of each corner block is N^(1/2)

Cost: 2N² + O(N)

Polynomial Differentiation

• For example, Legendre, Hermite, Laguerre

• Fewer tricks available, but the Negative Sum Trick still provides improvements

• Ordering the summation may become even more important

Higher Order Derivatives

• Best not to compute D^(2) = D², etc.

• Formulas by Welfert (implemented in Weideman-Reddy)

• Negative sum trick again shows improvements

• Higher order differentiation matrices are badly conditioned, so gaining a little more accuracy is more important than for first order
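For reference, the recursion of Welfert type used in the Weideman-Reddy suite has the following form (quoted from memory of that code, so treat it as a sketch rather than the exact slide content): for j ≠ k,

D^{(m)}_{kj} = \frac{m}{x_k - x_j} \left( \frac{w_j}{w_k} D^{(m-1)}_{kk} - D^{(m-1)}_{kj} \right), \qquad D^{(m)}_{kk} = -\sum_{j \neq k} D^{(m)}_{kj},

with the diagonal again filled by the negative sum trick.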

Using D to solve problems

• In many applications the first and last row/column of D are removed because of boundary conditions

• f -> Df appears to be most sensitive to how D is computed (forward problem = differentiation)

• Have observed improvements in solving BVPs

Skip

Solving a Singular BVP

Results

Close

• Demystified some of the less intuitive behaviour of differentiation matrices

• Get more accuracy for the same cost

• Study the effects of using the various differentiation matrices in applications

• Forward problem is more sensitive than inverse problem

• Df: time-stepping, iterative methods

To think about

• Is double precision enough as we are able to solve “bigger” problems?

• Irony of spectral methods: exponential convergence, yet round-off error is the limiting factor

• Accuracy requirements limit us to N of moderate size; the FFT is not so much faster than the matrix-based approach

And now for the twist…