Transcript of Lecture 005

Polynomial Approximation by Least Squares Using Orthogonal Polynomials

Some of the difficulties in polynomial approximation come from the unsuitability of the monomial form. More effective ways can be found by relying on the concept of orthogonal polynomials.

    Definition 0.0. Orthogonal Polynomials

A sequence of polynomials $p_0, p_1, \ldots, p_n, \ldots$ is said to be orthogonal over the interval $[a, b]$ with respect to the weight function $w(x)$ if

$$\int_a^b w(x)\, p_i(x)\, p_j(x)\, dx = 0$$

for all $i \neq j$. The sequence furthermore satisfies

$$\int_a^b w(x)\, p_i(x)\, p_i(x)\, dx = c$$

for all $i$, with a constant $c \in \mathbb{R}$; for $c = 1$ the sequence is called orthonormal.

Orthogonal polynomials are a major topic in approximation theory and many important results have been developed. Here we list only a few of the many types of orthogonal polynomials that have been studied, with recipes for computing the individual terms. As we will see, orthogonal polynomials are quite important in numerical analysis. Constructing algorithms with orthogonal polynomials often requires a knowledge of their special properties.

    1.0.0.1 Legendre Polynomials

The Legendre polynomials $P_n(x)$ are orthogonal with respect to $w(x) = 1$ over $[-1, 1]$. Legendre polynomials are useful if we have to deal with data or functions which are defined on a finite interval. The first few Legendre polynomials are tabulated in the following table

n   P_n(x)
0   1
1   x
2   (1/2)(3x^2 - 1)
3   (1/2)(5x^3 - 3x)
4   (1/8)(35x^4 - 30x^2 + 3)
5   (1/8)(63x^5 - 70x^3 + 15x)

Successive Legendre polynomials can be generated by the three-term recurrence formula

$$P_{n+1}(x) = \frac{2n + 1}{n + 1}\, x\, P_n(x) - \frac{n}{n + 1}\, P_{n-1}(x).$$
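To make the recurrence concrete, here is a minimal Mathematica sketch (the helper name legendreRec is ours, not from the lecture) that iterates the formula and compares with the built-in LegendreP:

legendreRec[0, x_] := 1
legendreRec[1, x_] := x
legendreRec[n_, x_] := Expand[((2 n - 1) x legendreRec[n - 1, x] - (n - 1) legendreRec[n - 2, x])/n]  (* the recurrence above, shifted from n + 1 to n *)

Table[Expand[legendreRec[n, x] - LegendreP[n, x]], {n, 0, 5}]
(* {0, 0, 0, 0, 0, 0}: the recurrence reproduces the tabulated polynomials *)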


These polynomials are orthogonal, but not orthonormal, so formula (0.0) will not be normalized to one. However, they can be normalized by multiplying with a suitable constant. The following Figure 0.0 shows the first 5 Legendre polynomials.

Figure 0.0. The first 5 Legendre polynomials are shown on the interval $x \in [-1, 1]$.

We can verify the orthogonality of the polynomials by determining the matrix

$$a_{i,j} = \int_{-1}^{1} P_i(x)\, P_j(x)\, dx$$

of the integrals, with indices $i, j$ specifying the order of the respective Legendre polynomials. The matrix up to order $5 \times 5$ is shown in the next line

leg1 = ParallelTable[Integrate[LegendreP[i, x] LegendreP[j, x], {x, -1, 1}], {i, 1, 5}, {j, 1, 5}]; MatrixForm[leg1]

2/3   0     0     0     0
0     2/5   0     0     0
0     0     2/7   0     0
0     0     0     2/9   0
0     0     0     0     2/11

It is obvious that the off-diagonal elements vanish and that the diagonal elements $i = j$ show a finite value different from 1.
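In fact the diagonal values are the known normalization integrals $\int_{-1}^{1} P_i(x)^2\, dx = 2/(2i + 1)$; a quick check with the built-in LegendreP:

Table[Integrate[LegendreP[i, x]^2, {x, -1, 1}], {i, 1, 5}]
(* {2/3, 2/5, 2/7, 2/9, 2/11} *)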

    1.0.0.2 Chebyshev Polynomials of the First Kind

The Chebyshev polynomials of the first kind $T_n(x)$ are orthogonal with respect to $w(x) = 1/\sqrt{1 - x^2}$ over $[-1, 1]$. Chebyshev polynomials are also useful to find approximations in a finite interval. The first few Chebyshev polynomials of the first kind are listed in the following table


n   T_n(x)
0   1
1   x
2   2x^2 - 1
3   4x^3 - 3x
4   8x^4 - 8x^2 + 1
5   16x^5 - 20x^3 + 5x

Successive Chebyshev polynomials of the first kind can be generated by the three-term recurrence formula

$$T_{n+1}(x) = 2x\, T_n(x) - T_{n-1}(x).$$

Chebyshev polynomials of the first kind have many properties that make them attractive for numerical work. One of these is the identity

$$T_n(x) = \cos(n \arccos x).$$

In spite of its appearance, the right side of (0.0) is a polynomial. From (0.0) follows the important fact that the roots of $T_n$ are explicitly given as

$$z_i = \cos\left(\frac{(2i + 1)\,\pi}{2n}\right), \quad i = 0, 1, 2, \ldots, n - 1.$$
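A minimal numerical check of the root formula, using the built-in ChebyshevT (the helper name z is ours):

z[i_, n_] := Cos[(2 i + 1) Pi/(2 n)]   (* root formula from above *)
Chop[N[Table[ChebyshevT[6, z[i, 6]], {i, 0, 5}]]]
(* {0, 0, 0, 0, 0, 0}: the z_i are indeed the roots of T_6 *)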

    The following Figure 0.0 shows the first 5 Chebyshev polynomials of the first kind.

Figure 0.0. The first 5 Chebyshev polynomials of the first kind are shown on the interval $x \in [-1, 1]$.

The orthogonality of the polynomials can be verified by determining the matrix

$$a_{i,j} = \int_{-1}^{1} \frac{T_i(x)\, T_j(x)}{\sqrt{1 - x^2}}\, dx$$

of the integrals, with indices $i, j$ specifying the order of the respective Chebyshev polynomials. The matrix of order $5 \times 5$ is shown in the next line


leg1 = ParallelTable[Integrate[ChebyshevT[i, x] ChebyshevT[j, x]/Sqrt[1 - x^2], {x, -1, 1}], {i, 1, 5}, {j, 1, 5}]; MatrixForm[leg1]

π/2   0     0     0     0
0     π/2   0     0     0
0     0     π/2   0     0
0     0     0     π/2   0
0     0     0     0     π/2

Obviously the off-diagonal elements vanish and the diagonal elements $i = j$ show a finite value $\pi/2$ different from 1. Contrary to the Legendre polynomials, the diagonal elements are all the same for different polynomial orders.

    1.0.0.3 Chebyshev Polynomials of the Second Kind

The Chebyshev polynomials of the second kind $U_n(x)$ are orthogonal with respect to $w(x) = \sqrt{1 - x^2}$ over $[-1, 1]$.

The Chebyshev polynomials of the first kind are related to those of the second kind by

$$T_n'(x) = n\, U_{n-1}(x).$$
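This derivative relation is easy to verify symbolically; a minimal sketch with the built-ins ChebyshevT and ChebyshevU:

Table[Simplify[D[ChebyshevT[n, x], x] - n ChebyshevU[n - 1, x]], {n, 1, 5}]
(* {0, 0, 0, 0, 0}: the relation holds for the first few orders *)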

An explicit form for Chebyshev polynomials of the second kind is

$$U_n(x) = \frac{\sin((n + 1) \arccos x)}{\sqrt{1 - x^2}}.$$

The first few Chebyshev polynomials of the second kind are

n   U_n(x)
0   1
1   2x
2   4x^2 - 1
3   8x^3 - 4x
4   16x^4 - 12x^2 + 1
5   32x^5 - 32x^3 + 6x

The following Figure 0.0 shows the first 5 Chebyshev polynomials of the second kind.


Figure 0.0. The first 5 Chebyshev polynomials of the second kind $U_n$ are shown on the interval $x \in [-1, 1]$.

The orthogonality of the polynomials can be verified by determining the matrix

$$a_{i,j} = \int_{-1}^{1} U_i(x)\, U_j(x)\, \sqrt{1 - x^2}\, dx$$

of the integrals, with indices $i, j$ specifying the order of the respective Chebyshev polynomials. The matrix of order $5 \times 5$ is shown in the next line

leg1 = ParallelTable[Integrate[Sqrt[1 - x^2] ChebyshevU[i, x] ChebyshevU[j, x], {x, -1, 1}], {i, 1, 5}, {j, 1, 5}]; MatrixForm[leg1]

π/2   0     0     0     0
0     π/2   0     0     0
0     0     π/2   0     0
0     0     0     π/2   0
0     0     0     0     π/2

Again the off-diagonal elements vanish and the diagonal elements $i = j$ show a finite value $\pi/2$ different from 1. Contrary to the Legendre polynomials, the diagonal elements all have the same finite value for all orders of the polynomial.

    1.0.0.4 Laguerre Polynomials

These polynomials are orthogonal with respect to $w(x) = e^{-x}$ over the interval $[0, \infty)$. Laguerre polynomials are useful in approximating functions on semi-infinite intervals. The Laguerre polynomials can be generated from the three-term recurrence formula

$$L_{n+1}(x) = \frac{1}{n + 1}\left[(2n + 1 - x)\, L_n(x) - n\, L_{n-1}(x)\right],$$

starting with $L_0(x) = 1$ and $L_1(x) = 1 - x$. The first few Laguerre polynomials are listed in the following table


n   L_n(x)
0   1
1   1 - x
2   (1/2)(x^2 - 4x + 2)
3   (1/6)(-x^3 + 9x^2 - 18x + 6)
4   (1/24)(x^4 - 16x^3 + 72x^2 - 96x + 24)
5   (1/120)(-x^5 + 25x^4 - 200x^3 + 600x^2 - 600x + 120)
6   (1/720)(x^6 - 36x^5 + 450x^4 - 2400x^3 + 5400x^2 - 4320x + 720)
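As with the Legendre case, the recurrence can be checked against the built-in LaguerreL; a minimal sketch (the helper name laguerreRec is ours):

laguerreRec[0, x_] := 1
laguerreRec[1, x_] := 1 - x
laguerreRec[n_, x_] := Expand[((2 n - 1 - x) laguerreRec[n - 1, x] - (n - 1) laguerreRec[n - 2, x])/n]

Table[Expand[laguerreRec[n, x] - LaguerreL[n, x]], {n, 0, 6}]
(* {0, 0, 0, 0, 0, 0, 0}: the recurrence reproduces the tabulated polynomials *)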

Orthogonality of the Laguerre polynomials is checked by verifying the integrals

$$a_{i,j} = \int_0^{\infty} L_i(x)\, L_j(x)\, e^{-x}\, dx$$

for different polynomial orders $i$ and $j$. The matrix elements with indices $i, j$ specifying the order of the respective Laguerre polynomials are calculated for a matrix of order $5 \times 5$, shown in the next line

leg1 = ParallelTable[Integrate[Exp[-x] LaguerreL[i, x] LaguerreL[j, x], {x, 0, Infinity}], {i, 1, 5}, {j, 1, 5}]; MatrixForm[leg1]

1   0   0   0   0
0   1   0   0   0
0   0   1   0   0
0   0   0   1   0
0   0   0   0   1

The off-diagonal elements vanish and the diagonal elements $i = j$ are equal to 1. This means that Laguerre polynomials are orthogonal and normalized to one.

    A series of Laguerre polynomials with different polynomial order is shown in the following Figure 0.0.

Figure 0.0. The first 5 Laguerre polynomials are shown on the interval $x \in [0, 10]$.


    1.0.0.5 Hermite Polynomials

The weight $w(x) = e^{-x^2}$ over $(-\infty, \infty)$ leads to the Hermite polynomials. With

n   H_n(x)
0   1
1   2x
2   4x^2 - 2
3   8x^3 - 12x
4   16x^4 - 48x^2 + 12
5   32x^5 - 160x^3 + 120x

successive Hermite polynomials can be generated by

$$H_{n+1}(x) = 2x\, H_n(x) - 2n\, H_{n-1}(x).$$

Orthogonality of the Hermite polynomials is checked by verifying the integrals

$$a_{i,j} = \int_{-\infty}^{\infty} H_i(x)\, H_j(x)\, e^{-x^2}\, dx$$

for different polynomial orders $i$ and $j$. The matrix elements with indices $i, j$ specifying the order of the respective Hermite polynomials are calculated for a matrix of order $5 \times 5$, shown in the next line

leg1 = ParallelTable[Integrate[Exp[-x^2] HermiteH[i, x] HermiteH[j, x], {x, -Infinity, Infinity}], {i, 1, 5}, {j, 1, 5}]; MatrixForm[leg1]

2√π    0      0      0      0
0      8√π    0      0      0
0      0      48√π   0      0
0      0      0      384√π  0
0      0      0      0      3840√π

The off-diagonal elements vanish and the diagonal elements $i = j$ are equal to a finite value which increases with the order of the polynomial. This means that Hermite polynomials are orthogonal but not normalized.
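The diagonal values follow the known closed form $\int_{-\infty}^{\infty} e^{-x^2} H_n(x)^2\, dx = 2^n n! \sqrt{\pi}$, which can be checked with the built-in HermiteH:

Table[Integrate[Exp[-x^2] HermiteH[n, x]^2, {x, -Infinity, Infinity}] == 2^n n! Sqrt[Pi], {n, 1, 5}]
(* {True, True, True, True, True} *)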

    A series of Hermite polynomials with different polynomial order is shown in the following Figure 0.0.


Figure 0.0. The first 5 Hermite polynomials are shown on the interval $x \in [-3, 3]$.

Orthogonal polynomials have many uses, but here we are interested only in orthogonal polynomials for the solution of the least square problem. For simplicity, assume that $a = -1$ and $b = 1$. Instead of the standard monomial form, we write the approximation as an expansion in Legendre polynomials

$$p_n(x) = \sum_{i=0}^{n} a_i\, P_i(x).$$

We substitute this into the least square approximation (0.XXX) and repeat the process leading to (0.XXX) to find

$$c_{ij} = \int_{-1}^{1} P_i(x)\, P_j(x)\, dx.$$

Because of the orthogonality of the Legendre polynomials, the matrix in (0.XXX) is diagonal, making it well-conditioned and leading to the immediate solution

$$a_i = \frac{\int_{-1}^{1} f(x)\, P_i(x)\, dx}{\int_{-1}^{1} P_i^2(x)\, dx}.$$
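A minimal sketch of this coefficient formula in Mathematica, applied to the function of the example that follows (the helper name a is ours; the symbol t avoids a clash with the integration variable x):

a[i_] := Integrate[Exp[x^2] LegendreP[i, x], {x, -1, 1}]/Integrate[LegendreP[i, x]^2, {x, -1, 1}]
p2 = Sum[a[i] LegendreP[i, t], {i, 0, 2}];
Simplify[N[p2]]
(* 0.93666 + 1.57798 t^2, matching the worked example below *)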

Example 0.0. Polynomial Approximation

Approximate $f(x) = e^{x^2}$ on $[-1, 1]$ by a polynomial of degree two, using a least square method.

Solution 0.13. Writing the solution as

$$p_2(x) = a_0 P_0(x) + a_1 P_1(x) + a_2 P_2(x),$$

we can determine the coefficients by (0.0) as

$$a_0 = \frac{\int_{-1}^{1} e^{x^2}\, P_0(x)\, dx}{\int_{-1}^{1} P_0(x)\, P_0(x)\, dx} = {}_1F_1\!\left(\tfrac{1}{2}; \tfrac{3}{2}; 1\right).$$

    For the first polynomial order we get


$$a_1 = \frac{\int_{-1}^{1} e^{x^2}\, P_1(x)\, dx}{\int_{-1}^{1} P_1(x)\, P_1(x)\, dx} = 0.$$

The second order Legendre polynomial delivers

$$a_2 = \frac{\int_{-1}^{1} e^{x^2}\, P_2(x)\, dx}{\int_{-1}^{1} P_2(x)\, P_2(x)\, dx} = \frac{5}{4}\left(3e - 5\, {}_1F_1\!\left(\tfrac{1}{2}; \tfrac{3}{2}; 1\right)\right).$$

The result obtained this way is

$$p_2 = a_0 P_0(x) + a_1 P_1(x) + a_2 P_2(x) = \frac{5}{8}\left(3e - 5\, {}_1F_1\!\left(\tfrac{1}{2}; \tfrac{3}{2}; 1\right)\right)\left(3x^2 - 1\right) + {}_1F_1\!\left(\tfrac{1}{2}; \tfrac{3}{2}; 1\right).$$

If we convert this symbolic formula to numeric values we find

Simplify[N[p2]]

1.57798 x^2 + 0.93666

    The approximation and the original function are shown in the following Figure 0.0

Figure 0.0. Interpolation of the function $f = e^{x^2}$ in the interval $[-1, 1]$ by using a Legendre polynomial approximation.

It is worthwhile to note that the approximation by orthogonal polynomials gives in theory the same result as (0.XXX). But because the coefficients of the Legendre polynomials are exact rational numbers, there is no rounding in this step and the whole process is much more stable. Least square approximation by polynomials is a good example of the stabilizing effect of a rearrangement of the computation.

We can generalize the least square process by looking at the weighted error of the approximation

$$\|f - p_n\|_w^2 = \int_a^b w(x)\left[f(x) - p_n(x)\right]^2 dx,$$

where the weight function $w(x)$ is positive throughout $[a, b]$. This weights the error more heavily where $w(x)$ is large, something that is often justified in practice. If we know the polynomials $\varphi_i(x)$ that are orthogonal with respect to this weight function, we write

$$p_n(x) = \sum_{i=0}^{n} a_i\, \varphi_i(x),$$

and, repeating the steps leading to (0.XXX), we find that

$$c_{ij} = \int_a^b w(x)\, \varphi_i(x)\, \varphi_j(x)\, dx.$$

    The orthogonality condition then makes the matrix in (0.XXX) diagonal and we get the result that

$$a_i = \frac{\int_a^b w(x)\, \varphi_i(x)\, f(x)\, dx}{\int_a^b w(x)\, \varphi_i^2(x)\, dx}.$$
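A minimal numerical sketch of these weighted coefficients for the Chebyshev weight $w(x) = 1/\sqrt{1 - x^2}$, anticipating the next example (the helper names w and aCheb are ours):

w[x_] := 1/Sqrt[1 - x^2]
aCheb[i_] := NIntegrate[w[x] Exp[x^2] ChebyshevT[i, x], {x, -1, 1}]/NIntegrate[w[x] ChebyshevT[i, x]^2, {x, -1, 1}]
Chop[Expand[Sum[aCheb[i] ChebyshevT[i, t], {i, 0, 2}]]]
(* 0.902996 + 1.70078 t^2, matching the result below *)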

Example 0.0. Weighted Approximation

Find a second degree polynomial approximation to $f(x) = e^{x^2}$ on $[-1, 1]$, using a weighted least square approximation with $w(x) = 1/\sqrt{1 - x^2}$.

Solution 0.14. Here we use the Chebyshev polynomials of the first kind as an example. Using symmetry and (0.0) we find that the approximation is

$$p_2(x) = a_0 + a_1 T_1(x) + a_2 T_2(x),$$

where the coefficients $a_0$, $a_1$, and $a_2$ follow from the integral ratios

$$a_0 = \frac{\int_{-1}^{1} \frac{e^{x^2}\, T_0(x)}{\sqrt{1 - x^2}}\, dx}{\int_{-1}^{1} \frac{T_0(x)\, T_0(x)}{\sqrt{1 - x^2}}\, dx} = \sqrt{e}\; I_0\!\left(\tfrac{1}{2}\right),$$

where $I_0(x)$ is the modified Bessel function of order zero. The expansion coefficient $a_1$ is

$$a_1 = \frac{\int_{-1}^{1} \frac{e^{x^2}\, T_1(x)}{\sqrt{1 - x^2}}\, dx}{\int_{-1}^{1} \frac{T_1(x)\, T_1(x)}{\sqrt{1 - x^2}}\, dx} = 0,$$


and the third coefficient is

$$a_2 = \frac{\int_{-1}^{1} \frac{e^{x^2}\, T_2(x)}{\sqrt{1 - x^2}}\, dx}{\int_{-1}^{1} \frac{T_2(x)\, T_2(x)}{\sqrt{1 - x^2}}\, dx} = 2\sqrt{e}\; I_1\!\left(\tfrac{1}{2}\right),$$

where again $I_1(x)$ is a modified Bessel function. The total approximation is generated from these results as

$$p_2 = a_0 + a_1 T_1(x) + a_2 T_2(x) = 2\sqrt{e}\left(2x^2 - 1\right) I_1\!\left(\tfrac{1}{2}\right) + \sqrt{e}\; I_0\!\left(\tfrac{1}{2}\right).$$

The numerical representation is gained by

Simplify[N[p2]]

1.70078 x^2 + 0.902996

    The approximation of the function is graphically shown in the next plot

Figure 0.0. Interpolation of the function $f = e^{x^2}$ in the interval $[-1, 1]$ by using a weighted Chebyshev polynomial approximation.

Compared with the solution by Legendre polynomials, the errors at the end points are much smaller, but at the center the deviation is larger.

    An advantage of least square methods over interpolation is that they converge under fairly general conditions. Theproof of this is straightforward.

    Theorem 0.0. Convergence in the Mean

Let $p_n(x)$ denote the $n$th degree least squares polynomial approximation to a function $f$, defined on a finite interval $[a, b]$. Then

$$\lim_{n \to \infty} \|f - p_n\|_2 = 0.$$

    We call this behavior convergence in the mean.

Proof 0.3. Given any $\varepsilon > 0$, the Weierstrass theorem tells us that, for $n$ sufficiently large, there exists a polynomial $q_n$ of degree $n$ such that

$$\max_{x \in [a, b]} \left| q_n(x) - f(x) \right| \leq \varepsilon.$$

This implies that

$$\int_a^b \left[ f(x) - q_n(x) \right]^2 dx \leq (b - a)\, \varepsilon^2.$$

But then, because the least squares approximation $p_n$ minimizes the square of the deviation among all polynomials of degree $n$, we must have

$$\int_a^b \left[ f(x) - p_n(x) \right]^2 dx \leq (b - a)\, \varepsilon^2,$$

and convergence follows.

QED

Under suitable assumptions, almost identical arguments lead to the convergence of the weighted least square method. But note that the arguments only prove convergence in the mean and do not imply that the error is small at every point of the interval.

Sinc Approximations of Functions

The Sinc function, also called the sampling function, is a function that arises frequently in signal processing and the theory of Fourier transforms. The full name of the function is sine cardinal, but it is commonly referred to by its abbreviation, sinc; we use Sinc throughout this text.

    Definition 0.0. Sinc function.

Let $x \in \mathbb{R}$; then the Sinc function is defined by the ratio of the sine and the first order monomial as

$$\mathrm{Sinc}(x) = \frac{\sin(\pi x)}{\pi x}.$$

This function is the basis of all future calculations. A graphical representation of this function is shown in the following Figure. The function is continuous and differentiable for any argument. The maximum value of the function occurs at $x = 0$, where the function value is 1. Note that the roots or zeros of the function occur at integer values of $x$.


Figure 0.0. The Sinc function shows decaying oscillations. The zeros of the function occur at integer numbers $x \in \mathbb{Z}$.
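A minimal way to reproduce this function in Mathematica; note that the built-in Sinc[x] is $\sin(x)/x$, so the normalized version of Definition 0.0 corresponds to Sinc[Pi x] (the lowercase helper sinc is ours):

sinc[x_] := Sinc[Pi x]   (* sin(π x)/(π x); the built-in handles the removable singularity at x = 0 *)
Plot[sinc[x], {x, -4, 4}]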

A Sinc approximation uses the shifted Sinc functions, exploiting the property of the zeros,

$$S_{k,h}(x) = \mathrm{Sinc}\!\left(\frac{x - kh}{h}\right),$$

where $h = \pi/\sqrt{N}$ is the step length of a discretization and $N$ determines the number of Sinc points, generated by

$$x_k = \psi(kh) = \frac{a + b\, e^{kh}}{1 + e^{kh}},$$

the inverse of the conformal map $\phi(x) = \log\left(\frac{x - a}{b - x}\right)$.

The origin of a Sinc approximation is the Cardinal representation of a function, given by

$$C(f)(x) = \sum_{k=-\infty}^{\infty} f(x_k)\, \mathrm{Sinc}\!\left(\frac{\phi(x) - kh}{h}\right).$$

The problem with the Cardinal representation of a function is the infinite upper and lower limits of the summation, which are not accessible in numerical calculations. Thus the Cardinal representation is approximated by a finite number of terms in a so-called Sinc approximation,

$$S(f)(x) = \sum_{k=-N}^{N} f(x_k)\, \mathrm{Sinc}\!\left(\frac{\phi(x) - kh}{h}\right),$$

where now the number of Sinc points is finite ($2N + 1$ in total).
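The whole construction can be packaged into one small function; a minimal sketch for the interval $[a, b] = [-1, 1]$ (the names sincApprox, map, and xk are ours, not from the lecture):

sincApprox[f_, n_, x_] := Module[{h = Pi/Sqrt[n], map, xk},
  map = Log[(1 + #)/(1 - #)] &;                 (* conformal map φ for [-1, 1] *)
  xk[k_] := (Exp[k h] - 1)/(Exp[k h] + 1);      (* Sinc points ψ(k h) *)
  Sum[f[xk[k]] Sinc[Pi (map[x] - k h)/h], {k, -n, n}]]

sincApprox[Cos, 4, 0.5]   (* close to Cos[0.5] ≈ 0.877583 *)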

    Example:

Given the function $f(x) = \cos(x)$, find the Sinc approximation of this function on the interval $x \in [-1, 1]$.

    Solution:

    We first set the number of Sinc points by

In[57]:= N1 = 4

Out[57]= 4

    Then we define the step length by


In[58]:= h = Pi/Sqrt[N1]

Out[58]= π/2

    The Sinc points accordingly are generated by

In[59]:= sincPoints = Table[(Exp[k h] - 1)/(Exp[k h] + 1), {k, -N1, N1}] // N

Out[59]= {-0.996272, -0.982193, -0.917152, -0.655794, 0., 0.655794, 0.917152, 0.982193, 0.996272}

    The function values at these points follow by inserting the values into the function

In[60]:= fk = Map[Cos[#] &, sincPoints]

Out[60]= {0.543435, 0.5552, 0.608083, 0.792564, 1., 0.792564, 0.608083, 0.5552, 0.543435}

And the approximation is given by using the conformal map

In[14]:= φ[x_] := Log[(x + 1)/(1 - x)]

    via the approximation

In[61]:= Sf = Sum[fk[[k + N1 + 1]] Sin[Pi (φ[x] - k h)/h]/(Pi (φ[x] - k h)/h), {k, -N1, N1}]

Out[61]= 0.5 Sin[2 φ]/φ + 0.271718 Sin[2 (2π + φ)]/(2π + φ) + 0.2776 Sin[2 (3π/2 + φ)]/(3π/2 + φ) + 0.304042 Sin[2 (π + φ)]/(π + φ) + 0.396282 Sin[2 (π/2 + φ)]/(π/2 + φ) + 0.396282 Sin[2 (φ - π/2)]/(φ - π/2) + 0.304042 Sin[2 (φ - π)]/(φ - π) + 0.2776 Sin[2 (φ - 3π/2)]/(φ - 3π/2) + 0.271718 Sin[2 (φ - 2π)]/(φ - 2π), with φ short for Log[(1 + x)/(1 - x)]

    Comparing graphically the approximation with the function on the interval


In[62]:= Plot[Evaluate[{Cos[x], Sf}], {x, -0.99, 0.99}, AxesLabel -> {"x", "S(f)"}]

Out[62]= (plot of Cos[x] together with the Sinc approximation Sf on the interval [-0.99, 0.99])

    The local absolute error is shown in the following graph

In[63]:= LogPlot[Evaluate[Abs[Cos[x] - Sf]], {x, -0.99, 0.99}, AxesLabel -> {"x", "|cos(x) - S(f)|"}, PlotRange -> All]

Out[63]= (log-scale plot of the local absolute error, ranging between about 10^-10 and 10^-2)

representing a small quantity of order $10^{-3}$ for a total number of Sinc points $m = 9$.

    The total error on this interval is

In[64]:= NIntegrate[(Cos[x] - Sf)^2, {x, -1, 1}]^(1/2)

Out[64]= 0.031113

which is fairly good. If we increase the number of Sinc points and measure the error as a function of this number, we get the following table of values (errors rounded to six significant figures)

Out[46]//TableForm=
N    error        N    error        N    error
2    0.146228     23   0.00511355   44   0.00469787
3    0.0679162    24   0.00449295   45   0.00487361
4    0.0311055    25   0.00506385   46   0.00470541
5    0.0128889    26   0.00453552   47   0.00486649
6    0.00331268   27   0.00502474   48   0.00471203
7    0.00839860   28   0.00456950   49   0.00486021
8    0.00273359   29   0.00499340   50   0.00471788
9    0.00692161   30   0.00459705   51   0.00485466
10   0.00327823   31   0.00496789   52   0.00472308
11   0.00620679   32   0.00461970   53   0.00484972
12   0.00368803   33   0.00494685   54   0.00472771
13   0.00580213   34   0.00463854   55   0.00484530
14   0.00395967   35   0.00492930   56   0.00473187
15   0.00554988   36   0.00465438   57   0.00484134
16   0.00414394   37   0.00491450   58   0.00473560
17   0.00538172   38   0.00466782   59   0.00483777
18   0.00427361   39   0.00490190   60   0.00473897
19   0.00526388   40   0.00467933   61   0.00483455
20   0.00436797   41   0.00489109   62   0.00474202
21   0.00517804   42   0.00468925   63   0.00483163
22   0.00443867   43   0.00488175   64   0.00474480

we observe that the decay of the error follows nearly an exponential law $\varepsilon \sim e^{-\alpha \sqrt{N}}$ (see Figure below).

In[47]:= ListLogLogPlot[errorList, Frame -> True, FrameLabel -> {"N", "ε"}, PlotRange -> All]

Out[47]= (log-log plot of the error ε against N, falling from about 0.15 at N = 2 to below 0.005 for large N)

The result is that the error is already small for a small number of Sinc points.
