
Research Article
Approximating the Inverse of a Square Matrix with Application in Computation of the Moore-Penrose Inverse

F. Soleymani,1 M. Sharifi,1 and S. Shateyi2

1 Department of Mathematics, Zahedan Branch, Islamic Azad University, Zahedan, Iran
2 Department of Mathematics and Applied Mathematics, School of Mathematical and Natural Sciences, University of Venda, Private Bag X5050, Thohoyandou 0950, South Africa

Correspondence should be addressed to S. Shateyi; stanford.shateyi@univen.ac.za

Received 3 January 2014; Revised 13 February 2014; Accepted 27 February 2014; Published 31 March 2014

Academic Editor: Juan R. Torregrosa

Copyright © 2014 F. Soleymani et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper presents a computational iterative method for finding approximate inverses of nonsingular matrices. Analysis of convergence reveals that the method reaches ninth-order convergence. The extension of the proposed iterative method to the computation of the Moore-Penrose inverse is furnished. Numerical results, including comparisons with different existing methods of the same type in the literature, are also presented to demonstrate the superiority of the new algorithm in finding approximate inverses.

1. Introduction

The main purpose of this paper is to present a method that is efficient in terms of speed of convergence, whose convergence can be established easily, and which is economical for large sparse matrices possessing sparse inverses. We also discuss the extension of the new scheme to finding the Moore-Penrose inverse of singular and rectangular matrices.

Finding a matrix inverse is important in some practical applications, such as finding a rational approximation for the Fermi-Dirac functions in density functional theory [1], because it conveys significant features of the problems being dealt with.

We further mention that in certain circumstances the computation of a matrix inverse is necessary. For example, there are many ways to encrypt a message, and the use of coding has become particularly significant in recent years. One way to encrypt or code a message is to use matrices and their inverses. Indeed, consider a fixed invertible matrix A. Convert the message into a matrix B such that the product AB is possible to perform, and send the message generated by AB. At the other end, the receiver needs to know A^{-1} in order to decrypt or decode the message sent.
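As a toy illustration of this coding idea (the key matrix, the message, and the block sizes below are our own assumptions, not taken from the paper), the encoding/decoding round trip can be sketched in Mathematica as follows:

(* Toy sketch: encode a message with an invertible key matrix A and decode it with A^-1. *)
key = {{2, 3}, {1, 2}};                    (* fixed invertible matrix A (det = 1) *)
msg = ToCharacterCode["HI"];               (* message converted to numbers *)
B   = Transpose[Partition[msg, 2]];        (* message matrix B such that A.B is defined *)
enc = key.B;                               (* transmitted message A.B *)
dec = Inverse[key].enc;                    (* receiver recovers B using A^-1 *)
FromCharacterCode[Flatten[Transpose[dec]]] (* returns "HI" *)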

Direct methods, such as Gaussian elimination with partial pivoting (GEPP) or LU decomposition, require a considerable amount of time to compute the inverse when the size of the matrix is high. To illustrate further, the Gaussian elimination with partial pivoting method cannot be highly parallelized, and this restricts its applicability in some cases. On the contrary, Schulz-type methods, which can be applied to large sparse matrices (possessing sparse inverses [2]) while preserving the sparsity feature and which can be parallelized, are in focus in such cases.

Some known methods were proposed for the approximation of a matrix inverse, such as the Schulz scheme. The oldest scheme, that is, the Schulz method (see [3]), is defined as

$$V_{n+1} = V_n \left(2I - AV_n\right), \quad n = 0, 1, 2, \ldots, \tag{1}$$

wherein I is the identity matrix. Note that [4] mentions that Newton-Schulz iterations can also be combined with wavelets or hierarchical matrices to compute the diagonal elements of A^{-1} independently.

The proposed iteration relies on matrix multiplications, which destroy sparsity, and therefore Schulz-type methods are less efficient for sparse inputs possessing dense inverses. However, a numerical dropping is usually applied to V_{n+1} to keep the approximate inverse sparse. Such a strategy is useful for preconditioning.
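For concreteness, a minimal Mathematica sketch of the classical Schulz iteration (1) is given below; the small dense test matrix and the fixed number of steps are illustrative assumptions, not taken from the paper.

(* Classical Schulz iteration V_{n+1} = V_n (2 I - A V_n) on a small dense test matrix. *)
A  = {{4., 1., 0.}, {1., 3., 1.}, {0., 1., 5.}};
Id = IdentityMatrix[3];
V  = Transpose[A]/(Norm[A, 1] Norm[A, Infinity]);   (* a safe initial guess V_0 *)
Do[V = V.(2 Id - A.V), {n, 10}];
Norm[Id - A.V, 1]                                   (* residual; near machine zero here *)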


To provide more iterations from this class of methods, we recall that W. Li and Z. Li in [5] proposed

$$V_{n+1} = V_n \left(3I - 3AV_n + \left(AV_n\right)^2\right), \quad n = 0, 1, 2, \ldots. \tag{2}$$

In 2011, Li et al. in [6] represented (2) in the following form:

$$V_{n+1} = V_n \left(3I - AV_n\left(3I - AV_n\right)\right), \quad n = 0, 1, 2, \ldots, \tag{3}$$

and also proposed another iterative method for finding A^{-1}, as follows:

$$V_{n+1} = \left[I + \frac{1}{4}\left(I - V_n A\right)\left(3I - V_n A\right)^2\right] V_n, \quad n = 0, 1, 2, \ldots. \tag{4}$$

Much of the application of these solvers (especially the Schulz method) to structured matrices was investigated in [7], while the authors in [8] showed that the matrix iteration of Schulz is numerically stable. Further discussion of such iterative schemes can be found in [9-12].

The Schulz-type matrix iterations depend strongly on the initial matrix V_0. In general, we construct the initial guess for square matrices as follows. For a general matrix A_{m×m} with no particular structure, we choose, as in [13],

$$V_0 = \frac{A^T}{m \,\|A\|_1 \|A\|_\infty}, \tag{5}$$

or V_0 = αI, where I is the identity matrix and α ∈ ℝ should be determined adaptively such that ‖I − αA‖ < 1. For diagonally dominant (DD) matrices, we choose, as in [14], V_0 = diag(1/a_{11}, 1/a_{22}, ..., 1/a_{mm}), wherein a_{ii} is the ith diagonal entry of A. And for a symmetric positive definite matrix, by using [15], we choose V_0 = (1/‖A‖_F) I, where ‖·‖_F is the Frobenius norm.

For rectangular or singular matrices, one may choose V_0 = A^*/(‖A‖_1 ‖A‖_∞) or V_0 = A^T/(‖A‖_1 ‖A‖_∞), based on [16]. Also, we could choose V_0 = αA^T, where 0 < α < 2/ρ(A^T A), or

$$V_0 = \alpha A^*, \tag{6}$$

with 0 < α < 2/ρ(A^* A) [17]. Note that these choices could also be considered for square matrices. However, the most efficient way of producing V_0 (for square nonsingular matrices) is the hybrid approach presented in Algorithm 1 of [18].

The rest of this paper is organized as follows. The main contributions of this article are given in Sections 2 and 3. In Section 2, we analyze a new scheme for matrix inversion, while a discussion of the applicability of the scheme to the Moore-Penrose inverse is given in Section 3. Section 4 discusses the performance and efficiency of the newly proposed method numerically, in comparison with the other schemes. Finally, concluding remarks are presented in Section 5.

2. Main Result

The derivation of different Schulz-type methods for matrix inversion relies on iterative (one- or multipoint) methods for the solution of nonlinear equations [19, 20]. For instance, imposing Newton's iteration on the matrix equation AV = I results in (1), as fully discussed in [21]. Here, we apply the following new nonlinear solver to the matrix equation f(V) = V^{-1} − A = 0:

$$\begin{aligned}
Y_n &= V_n - f'(V_n)^{-1} f(V_n) - 2^{-1}\left(f'(V_n)^{-1} f''(V_n)\right)\left(f'(V_n)^{-1} f(V_n)\right)^2,\\
Z_n &= Y_n - 2^{-1} f'(Y_n)^{-1} f(Y_n),\\
V_{n+1} &= Y_n - f'(Z_n)^{-1} f(Y_n), \quad n = 0, 1, 2, \ldots.
\end{aligned} \tag{7}$$

Note that (7) is novel and is constructed in such a way as to produce a new matrix iterative method for finding generalized inverses efficiently. Now, the following iteration method for matrix inversion can be obtained:

$$\begin{aligned}
V_{n+1} = \frac{1}{4} V_n &\left(3I + AV_n\left(-3I + AV_n\right)\right)\\
&\times \left(4I - \left(-I + AV_n\right)^3 \left(-3I + AV_n\left(3I + AV_n\left(-3I + AV_n\right)\right)\right)^2\right), \quad n = 0, 1, 2, \ldots.
\end{aligned} \tag{8}$$

The obtained scheme includes matrix powers, which are certainly too costly. To remedy this, we rewrite the obtained iteration as efficiently as possible, also reducing the number of matrix-matrix multiplications, as follows:

$$\begin{aligned}
\psi_n &= AV_n,\\
\zeta_n &= 3I + \psi_n\left(-3I + \psi_n\right),\\
\nu_n &= \psi_n \zeta_n,\\
V_{n+1} &= -\frac{1}{4}\, V_n \zeta_n \left(-13I + \nu_n\left(15I + \nu_n\left(-7I + \nu_n\right)\right)\right), \quad n = 0, 1, 2, \ldots,
\end{aligned} \tag{9}$$

whereas I is the identity matrix and the sequence of matrix iterates {V_n}_{n=0}^{∞} converges to A^{-1} for a good initial guess.

It should be remarked that iterations of any convergence order for nonsingular square matrices can be generated as in Section 6 of Chapter 2 of [22], whereas the general approach for rectangular matrices was discussed in Chapter 5 of [23] and in the recent paper [24]. In those constructions, a convergence order ρ is always attained with ρ matrix-matrix products; for example, (1) reaches order 2 using two matrix-matrix multiplications.
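To make the composition (9) concrete, the following minimal Mathematica sketch carries out the iteration exactly as written above on a small dense matrix; the test matrix and the fixed number of steps are our own assumptions (the authors' sparse implementation is given later in Algorithm 2).

(* One possible dense realization of iteration (9). *)
A  = {{4., 1., 0.}, {1., 5., 2.}, {0., 1., 6.}};
Id = IdentityMatrix[Length[A]];
V  = Transpose[A]/(Norm[A, 1] Norm[A, Infinity]);          (* initial guess *)
Do[
  psi  = A.V;                                              (* psi_n  = A V_n *)
  zeta = 3 Id + psi.(-3 Id + psi);                         (* zeta_n *)
  nu   = psi.zeta;                                         (* nu_n *)
  V    = -(1/4) V.zeta.(-13 Id + nu.(15 Id + nu.(-7 Id + nu))),
  {n, 4}];
Norm[Id - A.V, 1]                                          (* residual of the approximate inverse *)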

Two important matters must be mentioned at this moment to clarify why a higher order (efficient) method such as (9), which uses 7 matrix-matrix products to reach at least convergence order 9, is practical. First, following the comparative index of informational efficiency for inverse finders [25], defined by E = ρ/θ, wherein ρ and θ stand for the convergence order and the number of matrix-matrix products, the informational index of (9), that is, 9/7 ≈ 1.28, beats those of its competitors: 2/2 = 1 for (1), 3/3 = 1 for (2) and (3), and 3/4 = 0.75 for (4). Second, the significance of the new scheme shows up in its implementation. To illustrate further, such iterations depend totally on the initial matrices. Though there are certain efficient ways of finding V_0, in general such initial approximations require a high number of iterations (see, e.g., Figure 3, the blue color) to arrive at the convergence phase. On the other hand, each cycle of the implementation of such Schulz-type methods includes one stopping criterion based on a matrix norm, and this imposes a further burden, in general, on the low order methods in contrast to high order methods such as (9). Because the computation of a matrix norm (usually ‖·‖_2 for dense complex matrices and ‖·‖_F for large sparse matrices) takes time, the higher number of steps/iterations required by lower order methods is costlier than the lower number of steps/iterations of high order methods. Hence, the higher order (efficient) iterations are mostly better solvers in terms of the computational time needed to achieve the desired accuracy.

After this discussion of the need for and efficacy of the new scheme (9), we now prove the convergence behavior of (9) theoretically.

Theorem 1. Let A = [a_{ij}]_{m×m} be a nonsingular complex or real matrix. If the initial approximation V_0 satisfies

$$\|I - AV_0\| < 1, \tag{10}$$

then the iterative method (9) converges with at least ninth order to A^{-1}.

Proof. Let (10) hold, and for the sake of simplicity assume that E_0 = I − AV_0 and E_n = I − AV_n. It is straightforward to obtain

$$E_{n+1} = I - AV_{n+1} = \frac{1}{4}\left[3E_n^{9} + E_n^{10}\right]. \tag{11}$$

Hence, by taking an arbitrary norm from both sides of (11), we obtain

$$\|E_{n+1}\| \le \frac{1}{4}\left(3\|E_n\|^{9} + \|E_n\|^{10}\right). \tag{12}$$

The rest of the proof is similar to that of Theorem 2.1 of [26] and is hence omitted. Finally, I − AV_n → 0 as n → ∞, and thus V_n → A^{-1} as n → ∞; consequently, the sequence {‖E_n‖} is strictly monotonically decreasing. Moreover, if we denote by ε_n = V_n − A^{-1} the error matrix of the iterative procedure (9), then, since E_n = −Aε_n and ε_{n+1} = −A^{-1}E_{n+1}, we can easily obtain from (11) the following error inequality:

$$\|\epsilon_{n+1}\| \le \left(\frac{1}{4}\left[3\|A\|^{8} + \|A\|^{9}\|\epsilon_n\|\right]\right)\|\epsilon_n\|^{9}. \tag{13}$$

The error inequality (13) reveals that the iteration (9) converges with at least ninth order to A^{-1}. The proof is now complete.
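As an informal numerical check of the ninth-order behavior predicted by (12) (this experiment is our own and does not appear in the paper; the random test matrix and the initial guess are assumptions), one can monitor the residuals ‖E_n‖ produced by (9):

(* Monitor ||I - A V_n|| over a few steps of (9). *)
SeedRandom[1];
A  = RandomReal[{-1, 1}, {50, 50}] + 10 IdentityMatrix[50];  (* assumed well-conditioned matrix *)
Id = IdentityMatrix[50];
V  = Transpose[A]/(Norm[A, 1] Norm[A, Infinity]);
res = {};
Do[
  psi = A.V; zeta = 3 Id + psi.(-3 Id + psi); nu = psi.zeta;
  V = -(1/4) V.zeta.(-13 Id + nu.(15 Id + nu.(-7 Id + nu)));
  AppendTo[res, Norm[Id - A.V]],
  {n, 3}];
res  (* the exponent of the residual grows roughly ninefold per step, until machine precision *)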

Some features of the new scheme (9) are as follows.

(1) The new method can be taken into consideration for finding approximate inverses (preconditioners) of complex matrices. Using a dropping strategy, we can keep the matrix sparse so that it can be used as a preconditioner. Such an action is used throughout this paper for sparse matrices, applying the command Chop[exp, tolerance] in Mathematica to preserve the sparsity of the approximate inverses V_i for any i (a short sketch of this strategy is given right after this list).

(2) Unlike traditional direct solvers such as GEPP, the new scheme has the ability to be parallelized.

(3) In order to further reduce the computational burden of the matrix-by-matrix multiplications per computing step of the new algorithm when dealing with sparse matrices, we use the SparseArray[] command in Mathematica to preserve the sparsity of the output approximate inverse within a reasonable computational time. Additionally, for some special matrices, such as Toeplitz or Vandermonde matrices, further computational savings are possible, as discussed by Pan in [27]. See [28] for more information.

(4) If the initial guess commutes with the original matrix, then all matrices in the sequence {V_n}_{n=0}^{∞} satisfy the same commutation property. Notice that only some initial matrices guarantee that in all cases the sequence commutes with the matrix A.
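The following minimal sketch illustrates the dropping strategy of items (1) and (3): one step of (9) on an assumed sparse test matrix, with Chop[] discarding entries below a tolerance of 10^-10 (the matrix, its size, and the tolerance are our own assumptions).

(* One step of (9) with numerical dropping, keeping the approximate inverse sparse. *)
n  = 2000;
A  = SparseArray[{Band[{1, 1}] -> 4., Band[{1, 2}] -> 1., Band[{2, 1}] -> 1.}, {n, n}, 0.];
Id = SparseArray[{i_, i_} -> 1., {n, n}];
V  = DiagonalMatrix[SparseArray[1./Normal[Diagonal[A]]]];     (* diagonal initial guess *)
psi  = SparseArray[Chop[A.V, 10^-10]];
zeta = 3 Id + psi.(-3 Id + psi);
nu   = SparseArray[Chop[psi.zeta, 10^-10]];
V    = SparseArray[Chop[-(1/4) V.zeta.(-13 Id + nu.(15 Id + nu.(-7 Id + nu))), 10^-10]];
{Norm[Id - A.V, 1], Length[V["NonzeroValues"]]}   (* residual and number of stored nonzeros *)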

In the next section, we present some analytical discussion of the fact that the new method (9) can also be used for computing the Moore-Penrose generalized inverse.

3. An Iterative Method for Moore-Penrose Inverse

It is well known in the literature that the Moore-Penrose inverse of a complex matrix A ∈ ℂ^{m×k} (also called the pseudoinverse), denoted by A^† ∈ ℂ^{k×m}, is the matrix V ∈ ℂ^{k×m} satisfying the following conditions:

$$AVA = A, \quad VAV = V, \quad (AV)^* = AV, \quad (VA)^* = VA, \tag{14}$$

wherein A^* is the conjugate transpose of A. This inverse exists and is unique.

Various numerical methods have been developed for approximating the Moore-Penrose inverse (see, e.g., [29]). The most comprehensive review of such iterations can be found in [17], where the authors mention that iterative Schulz-type methods can be taken into account for finding the pseudoinverse. For example, it is known that (1) converges to the pseudoinverse in the general case if V_0 = αA^*, where 0 < α < 2/ρ(A^*A) and ρ(·) denotes the spectral radius.
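For instance (a small rectangular example of our own, used only for illustration), a valid α and the corresponding initial matrix (6) can be obtained from the largest singular value, since ρ(A^*A) = σ_max(A)^2:

(* Initial matrix V_0 = alpha A* for the pseudoinverse, with 0 < alpha < 2/rho(A* A). *)
A     = {{1., 2., 0., 1.}, {0., 1., 3., 2.}};     (* assumed 2 x 4 test matrix *)
alpha = 1./SingularValueList[A, 1][[1]]^2;        (* 1/sigma_max^2 < 2/rho(A* A) *)
V0    = alpha ConjugateTranspose[A];
Norm[PseudoInverse[A] - V0]                       (* a crude start, to be refined by (9) *)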

Accordingly, the new scheme (9) also converges to the Moore-Penrose inverse.


(* Test matrix of Example 7: a 30000 x 30000 complex banded sparse matrix. *)
n = 30000;
A = SparseArray[{Band[{195, 10000}] -> -I, Band[{1, 1}] -> 19,
    Band[{1000, 2500}] -> 21, Band[{-60, -1800}] -> 11,
    Band[{-600, 170}] -> 2 + I, Band[{-1350, 250}] -> -53},
   {n, n}, 0];

Algorithm 1

Figure 1: The matrix plot (a) and the inverse (b) of the large complex sparse 30000 × 30000 matrix A in Example 7.

In order to validate this fact analytically, we provide some supporting theoretical results in what follows.

Lemma 2. For the sequence {V_n}_{n=0}^{∞} generated by the iterative Schulz-type method (9) with the initial matrix (6), for any n ≥ 0 it holds that

$$\left(AV_n\right)^* = AV_n, \quad \left(V_n A\right)^* = V_n A, \quad V_n A A^{\dagger} = V_n, \quad A^{\dagger} A V_n = V_n. \tag{15}$$

Proof. The proof of this lemma is based on mathematical induction. Such a procedure is similar to Lemma 3.1 of [30] and is hence omitted.

Theorem 3. For the rectangular complex matrix A ∈ ℂ^{m×k} and the sequence {V_n}_{n=0}^{∞} generated by (9) with the initial approximation (6), for any n ≥ 0, the sequence converges to the pseudoinverse A^† with ninth order of convergence.

Proof. We consider 𝓔_n = V_n − A^† as the error matrix for finding the Moore-Penrose inverse. Then the proof is similar to the proof of Theorem 4 in [18] and is hence omitted. We finally have

$$\|\mathcal{E}_{n+1}\| \le \|A^{\dagger}\| \, \|A\|^{9} \, \|\mathcal{E}_n\|^{9}. \tag{16}$$

Thus, V_n − A^† → 0; that is, the sequence generated by (9) converges to the Moore-Penrose inverse as n → +∞. This ends the proof.

Theorem 4. Under the same assumptions as in Theorem 3, the iterative method (9) is asymptotically stable for finding the Moore-Penrose generalized inverse.

Proof. The steps of proving the asymptotical stability of (9) are similar to those recently taken for a general family of methods in [31]. Hence, the proof is omitted.

Remark 5. It should be remarked that the generalization of our proposed scheme to generalized outer inverses, that is, A^{(2)}_{T,S}, is straightforward according to the recent work [32].

Remark 6. The new iteration (9) is free from matrix powers in its implementation, and this allows one to apply it easily for finding generalized inverses.

4. Computational Experiments

Using the computer algebra system Mathematica 8 [33], we now compare our iterative algorithm with other Schulz-type methods.


Table 1: Results of comparisons for Example 7.

Methods                (1)              (3)              (4)              (9)
Convergence rate       2                3                3                7
Iteration number       3                2                2                1
Residual norm          8.32717 × 10^-7  1.21303 × 10^-7  5.10014 × 10^-8  9.7105 × 10^-8
No. of nonzero ele.    591107           720849           800689           762847
Time (in seconds)      4.22             4.05             9.32             4.18

(* Mathematica 8 code of the new scheme (9) for Example 7; V1 = V_n, V2 = psi_n, V3 = zeta_n, V4 = nu_n. *)
Id = SparseArray[{i_, i_} -> 1, {n, n}];
V = DiagonalMatrix[SparseArray[1/Normal[Diagonal[A]]]];
Do[V1 = SparseArray[V];
  V2 = Chop[A.V1]; V3 = 3 Id + V2.(-3 Id + V2);
  V4 = SparseArray[V2.V3];
  V = Chop[-(1/4) V1.V3.SparseArray[-13 Id
       + V4.(15 Id + V4.(-7 Id + V4))]];
  Print[V]; L[i] = N[Norm[Id - V.A, 1]];
  Print["The residual norm is ",
   Column[{i}, Frame -> All, FrameStyle -> Directive[Blue]],
   Column[{L[i]}, Frame -> All, FrameStyle -> Directive[Blue]]],
  {i, 1}] // AbsoluteTiming

Algorithm 2

(* Test matrix of Example 8: a 10000 x 10000 banded sparse matrix. *)
n = 10000;
A = SparseArray[{Band[{-700, -200}] -> 1, Band[{1, 1}] -> -15,
    Band[{1, -400}] -> 9, Band[{1, -400}] -> -1,
    Band[{2000, 200}] -> 1}, {n, n}, 0];

Algorithm 3

Figure 2: The matrix plot (a) and the inverse (b) of the large sparse 10000 × 10000 matrix A in Example 8.


Table 2: Results of comparisons for Example 8.

Methods                (1)               (3)              (4)               (9)
Convergence rate       2                 3                3                 7
Iteration number       10                7                6                 3
Residual norm          1.29011 × 10^-10  1.63299 × 10^-9  1.18619 × 10^-11  7.68192 × 10^-10
No. of nonzero ele.    41635             42340            41635             41635
Time (in seconds)      2.23              2.15             2.26              1.95

Table 3: Results of elapsed computational time for solving Example 9.

Methods                      GMRES   PGMRES-(1)   PGMRES-(3)   PGMRES-(4)   PGMRES-(9)
The total time (in seconds)  0.29    0.14         0.14         0.09         0.07

For the numerical comparisons in this section, we have used the methods (1), (3), (4), and (9). We implement the above algorithms on the following examples using the built-in precision of Mathematica 8.

Example 7. Let us consider the large complex sparse matrix of size 30000 × 30000 with 79512 nonzero elements, possessing a sparse inverse, as defined in Algorithm 1 (I = √−1); the matrix plot of A showing its structure is drawn in Figure 1(a).

Methods like (9) are powerful in finding an approximate inverse, or a robust approximate inverse preconditioner, in a low number of steps, in which the output form of the approximate inverse is also sparse.

In this test problem, the stopping criterion is ‖I − V_n A‖_1 ≤ 10^-7. Table 1 shows the number of iterations for the different methods. As the programs were running, we measured the running time using the command AbsoluteTiming[] to report the elapsed CPU time (in seconds) for this experiment. Note that the initial guess has been constructed using V_0 = diag(1/a_{11}, 1/a_{22}, ..., 1/a_{mm}) in Example 7. The plot of the approximate inverse obtained by applying (9) is portrayed in Figure 1(b). The reason why (4) is the worst method is the computation of a matrix power, which is time consuming for sparse matrices. In the tables of this paper, "No. of nonzero ele." stands for the number of nonzero elements.

In Algorithm 2, we provide the Mathematica 8 code of the new scheme (9) for this example, to clearly reveal the simplicity of the new scheme in finding approximate inverses using the threshold Chop[exp, 10^-10].

Example 8. Let us consider a large sparse 10000 × 10000 matrix (with 18601 nonzero elements) as shown in Algorithm 3.

Table 2 shows the number of iterations for the different methods, in order to reveal the efficiency of the proposed iteration, with special attention to the elapsed time. The matrix plot of A in this case, showing its structure alongside that of its approximate inverse, is drawn in Figure 2. Note that in Example 8 we have used an initial value constructed by V_0 = A^*/(‖A‖_1 ‖A‖_∞). The stopping criterion is the same as in the previous example.

The new method (9) is clearly better in terms of both the number of iterations and the computational time. In this paper, the computer specifications are Microsoft Windows XP, Intel(R) Pentium(R) 4 CPU, 3.20 GHz, with 4 GB of RAM.

The matrices used up to now are large and sparse enough to clarify the applicability of the new scheme. Anyhow, for some classes of problems there are collections of matrices that are used when testing new iterative linear system solvers or new preconditioners, such as Matrix Market (http://math.nist.gov/MatrixMarket/). In the following test, we compare the computational time of solving a practical problem using the above source.

Example 9. Consider the linear system Ax = b, in which A is defined by A = ExampleData[{"Matrix", "Bai/pde900"}] and the right-hand side vector is b = Table[1, {i, 1, 900}]. In Table 3, we compare the time consumed (in seconds) in solving this system by the GMRES solver and by its left preconditioned system V_1 A x = V_1 b. In Table 3, for example, PGMRES-(9) means that the linear system V_1 A x = V_1 b, in which V_1 has been calculated by (9), is solved by the GMRES solver. The tolerance in this test is that the residual norm be less than 10^-14. The results fully support the efficiency of the new scheme (9) in constructing robust preconditioners.
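A rough sketch of this setting follows; it is our own reading of the example, the matrix name "Bai/pde900" is reconstructed from the text, and Mathematica's generic "Krylov" linear solver is used here merely as a stand-in for the GMRES solver employed in the paper.

(* Left-preconditioned solve V1.A x = V1.b, with V1 obtained from one step of (9). *)
A  = ExampleData[{"Matrix", "Bai/pde900"}];        (* 900 x 900 test matrix *)
b  = Table[1., {i, 1, 900}];
Id = IdentityMatrix[900];
V  = Transpose[A]/(Norm[A, 1] Norm[A, Infinity]);  (* initial guess *)
psi = A.V; zeta = 3 Id + psi.(-3 Id + psi); nu = psi.zeta;
V1 = -(1/4) V.zeta.(-13 Id + nu.(15 Id + nu.(-7 Id + nu)));   (* one step of (9) *)
x  = LinearSolve[V1.A, V1.b, Method -> "Krylov"];  (* preconditioned iterative solve *)
Norm[A.x - b]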

Example 10. This experiment evaluates the applicability of the new method for finding the Moore-Penrose inverse of 20 random sparse matrices (possessing sparse pseudoinverses) of size m × k = 1200 × 1500, as shown in Algorithm 4.

Note that the identity matrix should then be an m × m matrix, and the approximate pseudoinverses would be of size k × m. In this example, the initial approximate Moore-Penrose inverse is constructed by V0 = ConjugateTranspose[A[j]]*(1/((SingularValueList[A[j], 1][[1]])^2)) for each random test matrix, and the stopping criterion is ‖V_{n+1} − V_n‖_1 ≤ 10^-8. We only compare method (1), denoted by "Schulz", method (3), denoted by "KMS", and the new iterative scheme (9), denoted by "PM".


(* The 20 random sparse 1200 x 1500 test matrices of Example 10. *)
m = 1200; k = 1500; number = 20;
Table[A[l] = SparseArray[{Band[{400, 1}, {m, k}] -> Random[] - I,
     Band[{1, 200}, {m, k}] -> 1.1 - Random[],
     Band[{-95, 100}] -> -0.02, Band[{-100, 500}] -> 0.1},
    {m, k}, 0], {l, number}];

Algorithm 4
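One possible driver for a single matrix from Algorithm 4 is sketched below; it applies (9) with the initial matrix and stopping rule described above and then checks two of the Penrose conditions (14) (the loop structure is our own assumption).

(* Compute the approximate pseudoinverse of A[1] with (9) and test it against (14). *)
a   = A[1];
V   = ConjugateTranspose[a]*(1/(SingularValueList[a, 1][[1]]^2));
Idm = SparseArray[{i_, i_} -> 1., {m, m}];
While[True,
  psi = a.V; zeta = 3 Idm + psi.(-3 Idm + psi); nu = psi.zeta;
  Vnew = -(1/4) V.zeta.(-13 Idm + nu.(15 Idm + nu.(-7 Idm + nu)));
  delta = Norm[Vnew - V, 1]; V = Vnew;
  If[delta <= 10^-8, Break[]]];
{Norm[a.V.a - a], Norm[V.a.V - V]}   (* both should be small if V approximates the pseudoinverse *)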

Figure 3: The results of comparisons for Example 10 in terms of the number of iterations.

The results for the number of iterations and the running time are compared in Figures 3 and 4. They show a clear advantage of the new scheme in finding the Moore-Penrose inverse.

5. Conclusions

In this paper, we have developed an iterative method for finding matrix inverses, which can be applied to real or complex matrices and preserves sparsity in the matrix-by-matrix multiplications by using a threshold. This allows us to reduce considerably the computational time and memory usage when applying the new iterative method. We have shown that the suggested method (9) reaches ninth order of convergence. The extension of the scheme to the pseudoinverse has also been discussed in the paper. The applicability of the new scheme was illustrated numerically in Section 4.

Finally, according to the numerical results obtained and the concrete applications of such solvers, we can conclude that the new method is efficient.

The new scheme (9) has been tested for matrix inversion. However, we believe that such iterations could also be applied for finding the inverse of an integer modulo p^n [34]. Following this idea will surely open a new topic of research, which is a combination of numerical analysis and number theory. We will focus on this line of research in future studies.
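As a small illustration of that direction (a toy example of our own, not taken from [34]), the Schulz step x ↦ x(2 − ax), taken modulo increasing powers of p, lifts an inverse modulo p to an inverse modulo p^n:

(* Toy sketch: Newton-Schulz (Hensel) lifting of an inverse modulo p to modulo p^n. *)
p = 7; n = 8; a = 342;
x = PowerMod[a, -1, p];          (* inverse of a modulo p *)
k = 1;
While[k < n,
  k = Min[2 k, n];
  x = Mod[x (2 - a x), p^k]];    (* one Schulz step doubles the p-adic accuracy *)
{x, Mod[a x, p^n]}               (* the second entry equals 1 *)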

Figure 4: The results of comparisons for Example 10 in terms of the elapsed time.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors thank the referees for their valuable comments and for the suggestions to improve the readability of the paper.

References

[1] R. B. Sidje and Y. Saad, "Rational approximation to the Fermi-Dirac function with applications in density functional theory," Numerical Algorithms, vol. 56, no. 3, pp. 455-479, 2011.

[2] F. Soleymani and P. S. Stanimirović, "A higher order iterative method for computing the Drazin inverse," The Scientific World Journal, vol. 2013, Article ID 708647, 11 pages, 2013.

[3] G. Schulz, "Iterative Berechnung der reziproken Matrix," Zeitschrift für angewandte Mathematik und Mechanik, vol. 13, pp. 57-59, 1933.

[4] L. Lin, J. Lu, R. Car, and E. Weinan, "Multipole representation of the Fermi operator with application to the electronic structure analysis of metallic systems," Physical Review B: Condensed Matter and Materials Physics, vol. 79, no. 11, Article ID 115133, 2009.

[5] W. Li and Z. Li, "A family of iterative methods for computing the approximate inverse of a square matrix and inner inverse of a non-square matrix," Applied Mathematics and Computation, vol. 215, no. 9, pp. 3433-3442, 2010.

[6] H.-B. Li, T.-Z. Huang, Y. Zhang, X.-P. Liu, and T.-X. Gu, "Chebyshev-type methods and preconditioning techniques," Applied Mathematics and Computation, vol. 218, no. 2, pp. 260-270, 2011.

[7] W. Hackbusch, B. N. Khoromskij, and E. E. Tyrtyshnikov, "Approximate iterations for structured matrices," Numerische Mathematik, vol. 109, no. 3, pp. 365-383, 2008.

[8] T. Söderström and G. W. Stewart, "On the numerical properties of an iterative method for computing the Moore-Penrose generalized inverse," SIAM Journal on Numerical Analysis, vol. 11, pp. 61-74, 1974.

[9] X. Liu and Y. Qin, "Successive matrix squaring algorithm for computing the generalized inverse A^{(2)}_{T,S}," Journal of Applied Mathematics, vol. 2012, Article ID 262034, 12 pages, 2012.

[10] X. Liu and S. Huang, "Proper splitting for the generalized inverse A^{(2)}_{T,S} and its application on Banach spaces," Abstract and Applied Analysis, vol. 2012, Article ID 736929, 9 pages, 2012.

[11] X. Liu and F. Huang, "Higher-order convergent iterative method for computing the generalized inverse over Banach spaces," Abstract and Applied Analysis, vol. 2013, Article ID 356105, 5 pages, 2013.

[12] X. Liu, H. Jin, and Y. Yu, "Higher-order convergent iterative method for computing the generalized inverse and its application to Toeplitz matrices," Linear Algebra and Its Applications, vol. 439, no. 6, pp. 1635-1650, 2013.

[13] J. Rajagopalan, An iterative algorithm for inversion of matrices [M.S. thesis], Concordia University, Quebec, Canada, 1996.

[14] L. Grosz, "Preconditioning by incomplete block elimination," Numerical Linear Algebra with Applications, vol. 7, no. 7-8, pp. 527-541, 2000.

[15] G. Codevico, V. Y. Pan, and M. Van Barel, "Newton-like iteration based on a cubic polynomial for structured matrices," Numerical Algorithms, vol. 36, no. 4, pp. 365-380, 2004.

[16] V. Pan and R. Schreiber, "An improved Newton iteration for the generalized inverse of a matrix, with applications," SIAM Journal on Scientific and Statistical Computing, vol. 12, no. 5, pp. 1109-1130, 1991.

[17] A. Ben-Israel and T. N. E. Greville, Generalized Inverses, Springer, 2nd edition, 2003.

[18] F. Khaksar Haghani and F. Soleymani, "A new high-order stable numerical method for matrix inversion," The Scientific World Journal, vol. 2014, Article ID 830564, 10 pages, 2014.

[19] F. Soleymani, S. Karimi Vanani, H. I. Siyyam, and I. A. Al-Subaihi, "Numerical solution of nonlinear equations by an optimal eighth-order class of iterative methods," Annali dell'Universita di Ferrara, Sezione VII, Scienze Matematiche, vol. 59, no. 1, pp. 159-171, 2013.

[20] X. Wang and T. Zhang, "High-order Newton-type iterative methods with memory for solving nonlinear equations," Mathematical Communications, vol. 19, pp. 91-109, 2014.

[21] H. Hotelling, "Analysis of a complex of statistical variables into principal components," Journal of Educational Psychology, vol. 24, no. 6, pp. 417-441, 1933.

[22] E. Isaacson and H. B. Keller, Analysis of Numerical Methods, John Wiley & Sons, New York, NY, USA, 1966.

[23] E. V. Krishnamurthy and S. K. Sen, Numerical Algorithms: Computations in Science and Engineering, Affiliated East-West Press, New Delhi, India, 2007.

[24] L. Weiguo, L. Juan, and Q. Tiantian, "A family of iterative methods for computing Moore-Penrose inverse of a matrix," Linear Algebra and Its Applications, vol. 438, no. 1, pp. 47-56, 2013.

[25] F. Soleymani, "A fast convergent iterative solver for approximate inverse of matrices," Numerical Linear Algebra with Applications, 2013.

[26] M. Zaka Ullah, F. Soleymani, and A. S. Al-Fhaid, "An efficient matrix iteration for computing weighted Moore-Penrose inverse," Applied Mathematics and Computation, vol. 226, pp. 441-454, 2014.

[27] V. Y. Pan, Structured Matrices and Polynomials: Unified Superfast Algorithms, Birkhäuser, Boston, Mass, USA, 2001.

[28] M. Miladinović, S. Miljković, and P. Stanimirović, "Modified SMS method for computing outer inverses of Toeplitz matrices," Applied Mathematics and Computation, vol. 218, no. 7, pp. 3131-3143, 2011.

[29] H. Chen and Y. Wang, "A family of higher-order convergent iterative methods for computing the Moore-Penrose inverse," Applied Mathematics and Computation, vol. 218, no. 8, pp. 4012-4016, 2011.

[30] A. R. Soheili, F. Soleymani, and M. D. Petković, "On the computation of weighted Moore-Penrose inverse using a high-order matrix method," Computers & Mathematics with Applications, vol. 66, no. 11, pp. 2344-2351, 2013.

[31] F. Soleymani and P. S. Stanimirović, "A note on the stability of a pth order iteration for finding generalized inverses," Applied Mathematics Letters, vol. 28, pp. 77-81, 2014.

[32] P. S. Stanimirović and F. Soleymani, "A class of numerical algorithms for computing outer inverses," Journal of Computational and Applied Mathematics, vol. 263, pp. 236-245, 2014.

[33] S. Wolfram, The Mathematica Book, Wolfram Media, 5th edition, 2003.

[34] M. P. Knapp and C. Xenophontos, "Numerical analysis meets number theory: using rootfinding methods to calculate inverses mod p^n," Applicable Analysis and Discrete Mathematics, vol. 4, no. 1, pp. 23-31, 2010.

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 2: Research Article Approximating the Inverse of a Square ...downloads.hindawi.com/journals/jam/2014/731562.pdf · our proposed scheme for generalized outer inverses, that is, (2), ,

2 Journal of Applied Mathematics

To provide more iterations from this classification ofmethods we remember that W Li and Z Li in [5] proposed

119881119899+1

= 119881119899

(3119868 minus 3119860119881119899

+ (119860119881119899

)2

) 119899 = 0 1 2 (2)

In 2011 Li et al in [6] represented (2) in the followingform

119881119899+1

= 119881119899

(3119868 minus 119860119881119899

(3119868 minus 119860119881119899

)) 119899 = 0 1 2 (3)

and also proposed another iterative method for finding 119860minus1as comes next

119881119899+1

= [119868 +1

4(119868 minus 119881

119899

119860) (3119868 minus 119881119899

119860)2

]119881119899

119899 = 0 1 2

(4)

Much of the application of these solvers (specially theSchulz method) for structured matrices was investigated in[7] while the authors in [8] showed that the matrix iterationof Schulz is numerically stable Further discussion about suchiterative schemes might be found at [9ndash12]

The Schulz-type matrix iterations are dependent on theinitial matrix119881

0

toomuch In general we construct the initialguess for square matrices as follows For a general matrix119860119898times119898 satisfying no structure we choose as in [13]

1198810

=119860119879

1198981198601

119860infin

(5)

or 1198810

= 120572119868 where 119868 is the identity matrix and 120572 isin R

should adaptively be determined such that 119868 minus 120572119860 lt 1For diagonally dominant (DD) matrices we choose as in [14]1198810

= diag(111988611

111988622

1119886119898119898

) wherein 119886119894119894

is the diagonalentry of 119860 And for a symmetric positive definite matrix andby using [15] we choose 119881

0

= (1119860119865

)119868 whereas sdot 119865

is theFrobenius norm

For rectangular or singular matrices one may choose1198810

= 119860lowast

(1198601

119860infin

) or 1198810

= 119860119879

(1198601

119860infin

) based on[16] Also we could choose 119881

0

= 120572119860119879 where 0 lt 120572 lt

2120588(119860119879

119860) or

1198810

= 120572119860lowast

(6)

with 0 lt 120572 lt 2120588(119860lowast

119860) [17] Note that these choicescould also be considered for square matrices However themost efficient way for producing 119881

0

(for square nonsingularmatrices) is the hybrid approach presented in Algorithm 1 of[18]

The rest of this paper is organized as follows Themain contributions of this article are given in Sections 2and 3 In Section 2 we analyze a new scheme for matrixinversion while a discussion about the applicability of thescheme for Moore-Penrose inverse will be given in Section 3Section 4 will discuss the performance and the efficiency ofthe new proposed method numerically in comparison withthe other schemes Finally concluding remarks are presentedin Section 5

2 Main Result

The derivation of different Schulz-type methods for matrixinversion relies on iterative (one- or multipoint) methods forthe solution of nonlinear equations [19 20] For instanceimposing Newtonrsquos iteration on the matrix equation 119860119881 = 119868

would result in (1) as fully discussed in [21] Here we applythe following new nonlinear solver on the matrix equation119891(119881) = 119881

minus1

minus 119860 = 0

119884119899

= 119881119899

minus 1198911015840

(119881119899

)minus1

119891 (119881119899

) minus 2minus1

times (1198911015840

(119881119899

)minus1

11989110158401015840

(119881119899

)) (1198911015840

(119881119899

)minus1

119891 (119881119899

))2

119885119899

= 119884119899

minus 2minus1

1198911015840

(119884119899

)minus1

119891 (119884119899

)

119881119899+1

= 119884119899

minus 1198911015840

(119885119899

)minus1

119891 (119884119899

) 119899 = 0 1 2

(7)

Note that (7) is novel and constructed in such a way to pro-duce a new matrix iterative method for finding generalizedinverses efficiently Now the following iteration method formatrix inversion could be obtained

119881119899+1

=1

4119881119899

(3119868 + 119860119881119899

(minus3119868 + 119860119881119899

))

times (4119868 minus (minus119868 + 119860119881119899

)3

times (minus3119868 + 119860119881119899

(3119868 + 119860119881119899

(minus3119868 + 119860119881119899

)))2

)

119899 = 0 1 2

(8)

The obtained scheme includes matrix power which iscertainly of too much cost To remedy this we rewrite theobtained iteration as efficiently as possible to also reduce thenumber of matrix-matrix multiplications in what follows

120595119899

= 119860119881119899

120577119899

= 3119868 + 120595119899

(minus3119868 + 120595119899

)

120592119899

= 120595119899

120577119899

119881119899+1

= minus1

4119881119899

120577119899

(minus13119868 + 120592119899

(15119868 + 120592119899

(minus7119868 + 120592119899

)))

119899 = 0 1 2

(9)

whereas 119868 is the identity matrix and the sequence of matrixiterates 119881

119899

119899=infin

119899=0

converges to 119860minus1 for a good initial guessIt should be remarked that the convergence of any order

for nonsingular square matrices is generated in Section 6 ofChapter 2 of [22] whereas the general way for the rectangularmatrices was discussed in Chapter 5 of [23] and the recentpaper of [24] In those constructions always a convergenceorder 120588will be attained by 120588 times of matrix-matrix productssuch as (1) which reaches the order 2 using twomatrix-matrixmultiplications

Two important matters must be mentioned at thismoment to ease up the perception of why a higher order(efficient) method such as (9) with 7 matrix-matrix products

Journal of Applied Mathematics 3

to reach at least the convergence order 9 is practical First byfollowing the comparative index of informational efficiencyfor inverse finders [25] defined by 119864 = 120588120579 wherein 120588 and120579 stand for the convergence order and the number of matrix-matrix products then the informational index for (9) that is97 asymp 128 beats its other competitors 22 = 1 of (1) 33 = 1of (2)-(3) and 34 = 075 of (4) And second the significanceof the new scheme will be displayed in its implementationTo illustrate further such iterations depend totally on theinitial matrices Though there are certain and efficient waysfor finding 119881

0

in general such initial approximations take ahigh number of iterations (see eg Figure 3 the blue color) toarrive at the convergence phaseOn the other hand each cycleof the implementation of such Schulz-type methods includesone stopping criterion based on the use of amatrix norm andthis would impose further burden and load in general for thelow order methods in contrast to the high order methodssuch as (9) Because the computation of a matrix norm(usually sdot

2

for dense complex matrices and sdot 119865

for largesparse matrices) takes time and therefore higher number ofstepsiterations (which is the result of lower order methods)it will be costlier than the lower number of stepsiterationsof high order methods Hence the higher order (efficient)iterations are mostly better solvers in terms of computationaltime to achieve the desired accuracy

After this complete discussion of the need and efficacy ofthe new scheme (9) we are about to prove the convergencebehavior of (9) theoretically in what follows

Theorem 1 Let119860 = [119886119894119895

]119898times119898 be a nonsingular complex or real

matrix If the initial approximation 1198810

satisfies1003817100381710038171003817119868 minus 1198601198810

1003817100381710038171003817 lt 1 (10)

then the iterativemethod (9) converges with at least ninth orderto 119860minus1

Proof Let (10) hold and for the sake of simplicity assume that1198640

= 119868 minus 1198601198810

and 119864119899

= 119868 minus 119860119881119899

It is straightforward to have

119864119899+1

= 119868 minus 119860119881119899+1

=1

4[31198649

119899

+ 11986410

119899

] (11)

Hence by taking an arbitrary norm from both sides of (11)we obtain

1003817100381710038171003817119864119899+11003817100381710038171003817 le

1

4(31003817100381710038171003817119864119899

1003817100381710038171003817

9

+1003817100381710038171003817119864119899

1003817100381710038171003817

10

) (12)

The rest of the proof is similar to Theorem 21 of [26] andit is hence omitted Finally 119868 minus 119860119881

119899

rarr 0 when 119899 rarr infinand thus 119881

119899

rarr 119860minus1 as 119899 rarr infin So the sequence 119864

119899

2

is strictly monotonic decreasing Moreover if we denote 120598119899

=

119881119899

minus 119860minus1 as the error matrix in the iterative procedure (9)

then we could easily obtain the following error inequality

1003817100381710038171003817120598119899+11003817100381710038171003817 le (

1

4[3119860

8

+ 11986091003817100381710038171003817120598119899

1003817100381710038171003817])1003817100381710038171003817120598119899

1003817100381710038171003817

9

(13)

The error inequality (13) reveals that the iteration (9) con-verges with at least ninth order to 119860

minus1 The proof is nowcomplete

Some features of the new scheme (9) are as follows

(1) The new method can be taken into considerationfor finding approximate inverse (preconditioners) ofcomplex matrices Using a dropping strategy wecould keep the sparsity of a matrix to be usedas a preconditioner Such an action will be usedthroughout this paper for sparse matrices using thecommand Chop[exp tolerance] in Mathematica topreserve the sparsity of the approximate inverses 119881

119894

for any 119894

(2) Unlike the traditional direct solvers such as GEPP thenew scheme has the ability to be parallelized

(3) In order to further reduce the computational burdenof the matrix-by-matrix multiplications per comput-ing step of the new algorithm when dealing withsparse matrices in the new scheme by using SparseArray[] command in Mathematica we will preservethe sparsity features of the output approximate inversein a reasonable computational time Additionallyfor some special matrices such as Toeplitz or Van-dermonde types of matrices further computationalsavings are possible as discussed by Pan in [27] Seefor more information [28]

(4) If the initial guess commutes with the original matrixthen all matrices in the sequence 119881

119899

119899=infin

119899=0

satisfythe same property of commutation Notice that onlysome initial matrices guarantee that in all cases thesequence commutes with the matrix 119860

In the next section we present some analytical discussionabout the fact that the new method (9) could also be used forcomputing the Moore-Penrose generalized inverse

3 An Iterative Method forMoore-Penrose Inverse

It is well known in the literature that the Moore-Penroseinverse of a complex matrix 119860 isin C119898times119896 (also called pseudo-inverse) denoted by 119860dagger isin C119896times119898 is a matrix 119881 isin C119896times119898

satisfying the following conditions

119860119881119860 = 119860 119881119860119881 = 119881

(119860119881)lowast

= 119860119881 (119881119860)lowast

= 119881119860

(14)

wherein 119860lowast is the conjugate transpose of 119860 This inverse

uniquely existsVarious numerical solution methods have been devel-

oped for approximating Moore-Penrose inverse (see eg[29]) The most comprehensive review of such iterationscould be found in [17] while the authors mentioned thatiterative Schulz-type methods can be taken into account forfinding pseudoinverse For example it is known that (1)converges to the pseudoinverse in the general case if 119881

0

=

120572119860lowast where 0 lt 120572 lt 2120588(119860

lowast

119860) and 120588(sdot) denotes the spectralradius

Accordingly it is known that the new scheme (9) con-verges to the Moore-Penrose inverse In order validate this

4 Journal of Applied Mathematics

n = 30000

A = SparseArray[Band[195 10000] -gt minusI Band[1 1] -gt 19

Band[1000 2500] -gt 21 Band[minus60 minus1800] -gt 11

Band[minus600 170] -gt 2 + I Band[minus1350 250] -gt minus53

n n 0]

Algorithm 1

1

10000

20000

30000

1

10000

20000

30000

1 10000 20000 30000

1 10000 20000 30000

(a)

1

10000

20000

30000

1

10000

20000

30000

1 10000 20000 30000

1 10000 20000 30000

(b)

Figure 1 The matrix plot (a) and the inverse (b) of the large complex sparse 30000 times 30000matrix 119860 in Example 7

fact analytically we provide some important theoretics inwhat follows

Lemma 2 For the sequence 119881119899

119899=infin

119899=0

generated by the iterativeSchulz-type method (9) and the initial matrix (6) for any 119899 ge0 it holds that

(119860119881119899

)lowast

= 119860119881119899

(119881119899

119860)lowast

= 119881119899

119860

119881119899

119860119860dagger

= 119881119899

119860dagger

119860119881119899

= 119881119899

(15)

Proof The proof of this lemma is based on mathematicalinduction Such a procedure is similar to Lemma 31 of [30]and it is hence omitted

Theorem 3 For the rectangular complex matrix 119860 isin C119898times119896

and the sequence 119881119899

119899=infin

119899=0

generated by (9) for any 119899 ge 0using the initial approximation (6) the sequence converges tothe pseudoinverse 119860dagger with ninth order of convergence

Proof We consider E119899

= 119881119899

minus 119860dagger as the error matrix for

finding theMoore-Penrose inverseThen the proof is similarto the proof of Theorem 4 in [18] and it is hence omitted Wefinally have

1003817100381710038171003817E119899+11003817100381710038171003817 le

10038171003817100381710038171003817119860dagger

1003817100381710038171003817100381711986091003817100381710038171003817E119899

1003817100381710038171003817

9

(16)

Thus 119881119899

minus 119860dagger

rarr 0 that is the obtained sequence of (9)converges to the Moore-Penrose inverse as 119899 rarr +infin Thisends the proof

Theorem 4 Considering the same assumptions as inTheorem 3 then the iterative method (9) has asymptoticalstability for finding the Moore-Penrose generalized inverse

Proof The steps of proving the asymptotical stability of (9)are similar to those which have recently been taken fora general family of methods in [31] Hence the proof isomitted

Remark 5 It should be remarked that the generalization ofour proposed scheme for generalized outer inverses that is119860(2)

119879119878

is straight forward according to the recent work [32]

Remark 6 The new iteration (9) is free frommatrix power inits implementation and this allows one to apply it for findinggeneralized inverses easily

4 Computational Experiments

Using the computer algebra system Mathematica 8 [33] wenow compare our iterative algorithm with other Schulz-typemethods

Journal of Applied Mathematics 5

Table 1 Results of comparisons for Example 7

Methods(1) (3) (4) (9)

Convergence rate 2 3 3 7Iteration number 3 2 2 1Residual norm 832717 times 10

minus7

121303 times 10minus7

510014 times 10minus8

97105 times 10minus8

No of nonzero ele 591107 720849 800689 762847Time (in seconds) 422 405 932 418

Id = SparseArray[i i -gt 1 n n]

V = DiagonalMatrixSparseArray[1Normal[Diagonal[A]]]

Do[V1 = SparseArray[V]

V2 = Chop[AV1] V3 = 3 Id + V2(minus3 Id + V2)

V4 = SparseArray[V2V3]

V = Chop[minus(14) V1V3SparseArray[minus13 Id

+ V4(15 Id + V4(minus7 Id + V4))]]

Print[V]L[i]= N[Norm[Id minus VA 1]]

Print[The residual norm is

Column[i Frame -gt All FrameStyle -gt Directive[Blue]]

Column[L[i] Frame -gt All FrameStyle -gt Directive[Blue]]]

i 1] AbsoluteTiming

Algorithm 2

n = 10000

A = SparseArray[Band[minus700 minus200] -gt 1 Band[1 1] -gt minus15

Band[1 minus400] -gt 9 Band[1 minus400] -gt minus1

Band[2000 200] -gt 1 n n 0]

Algorithm 3

1

2000

4000

6000

8000

10000

1

2000

4000

6000

8000

10000

1 2000 4000 6000 8000 10000

1 2000 4000 6000 8000 10000

(a)

1

2000

4000

6000

8000

10000

1

2000

4000

6000

8000

10000

1 2000 4000 6000 8000 10000

1 2000 4000 6000 8000 10000

(b)

Figure 2 The matrix plot (a) and the inverse (b) of the large sparse 10000 times 10000matrix 119860 in Example 8

6 Journal of Applied Mathematics

Table 2 Results of comparisons for Example 8

Methods(1) (3) (4) (9)

Convergence rate 2 3 3 7Iteration number 10 7 6 3Residual norm 129011 times 10

minus10

163299 times 10minus9

118619 times 10minus11

768192 times 10minus10

No of nonzero ele 41635 42340 41635 41635Time (in seconds) 223 215 226 195

Table 3 Results of elapsed computational time for solving Example 9

Methods GMRES PGMRES-(1) PGMRES-(3) PGMRES-(4) PGMRES-(9)The total time 029 014 014 009 007

For numerical comparisons in this section we have usedthe methods (1) (3) (4) and (9) We implement the abovealgorithms on the following examples using the built-inprecision in Mathematica 8

Example 7 Let us consider the large complex sparse matrixof the size 30000 times 30000 with 79512 nonzero elementspossessing a sparse inverse as in Algorithm 1 (119868 = radicminus1)where the matrix plot of 119860 showing its structure has beendrawn in Figure 1(a)

Methods like (9) are powerful in finding an approximateinverse or in finding robust approximate inverse precondi-tioners in low number of steps in which the output form ofthe approximate inverse is also sparse

In this test problem, the stopping criterion is ||I − V_n A||_1 ≤ 10^-7. Table 1 shows the number of iterations for the different methods. As the programs were running, we measured the running time using the command AbsoluteTiming[] to report the elapsed CPU time (in seconds) for this experiment. Note that the initial guess has been constructed using V_0 = diag(1/a_11, 1/a_22, ..., 1/a_mm) in Example 7. The plot of the approximate inverse obtained by applying (9) has been portrayed in Figure 1(b). The reason that makes (4) the worst method here is the computation of matrix powers, which is time consuming for sparse matrices. In the tables of this paper, "No. of nonzero ele." stands for the number of nonzero elements.
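As a small illustration of this setup, the following minimal Mathematica sketch (not the authors' exact code) combines the diagonal initial guess with the residual-based stopping rule, writing one sweep of iteration (9) in the same form as in Algorithm 2; it assumes A is a nonsingular sparse n × n matrix such as the one of Example 7.

Id = SparseArray[{i_, i_} -> 1., {n, n}];
V = DiagonalMatrix[SparseArray[1./Normal[Diagonal[A]]]];   (* V0 = diag(1/a11, ..., 1/amm) *)
While[Norm[Id - V.A, 1] > 10^-7,                           (* stop once ||I - Vn A||_1 <= 10^-7 *)
  V2 = Chop[A.V]; V3 = 3 Id + V2.(-3 Id + V2); V4 = V2.V3;
  V = Chop[-(1/4) V.V3.(-13 Id + V4.(15 Id + V4.(-7 Id + V4)))]];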

In Algorithm 2, we provide the Mathematica 8 code of the new scheme (9) for this example, to clearly reveal the simplicity of the new scheme in finding approximate inverses using a threshold, Chop[exp, 10^-10].
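For readers unfamiliar with this command, a tiny illustrative example (the numbers are ours, not from the paper): Chop[expr, 10^-10] replaces approximate numbers whose magnitude lies below 10^-10 by exact zeros, which is what keeps the stored iterates V_n sparse.

Chop[{1.0, 3.*10^-12, -2.*10^-11 + 5.*10^-13 I}, 10^-10]
(* -> {1., 0, 0} *)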

Example 8. Let us consider a large sparse 10000 × 10000 matrix (with 18601 nonzero elements) as shown in Algorithm 3.

Table 2 shows the number of iterations for the different methods in order to reveal the efficiency of the proposed iteration, with special attention to the elapsed time. The matrix plot of A in this case, showing its structure alongside its approximate inverse, has been drawn in Figure 2. Note that in Example 8 we have used an initial value constructed by V_0 = A^*/(||A||_1 ||A||_∞). The stopping criterion is the same as in the previous example.
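In code, this initial matrix takes one line; a sketch, assuming A is the sparse matrix of Algorithm 3 (since this A is real, the conjugate transpose reduces to the transpose):

V0 = ConjugateTranspose[A]/(Norm[A, 1] Norm[A, Infinity]);   (* V0 = A^*/(||A||_1 ||A||_oo) *)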

The new method (9) is clearly better in terms of both the number of iterations and the computational time. In this paper, the computer specifications are Microsoft Windows XP, Intel(R) Pentium(R) 4 CPU 3.20 GHz, and 4 GB of RAM.

The matrices used up to now are large and sparse enough to clarify the applicability of the new scheme. However, for some classes of problems, there are collections of matrices that are used when testing new iterative linear system solvers or new preconditioners, such as Matrix Market (http://math.nist.gov/MatrixMarket/). In the following test, we compare the computational time of solving a practical problem using the above source.

Example 9. Consider the linear system Ax = b, in which A is defined by A = ExampleData[{"Matrix", "Bai/pde900"}] and the right-hand side vector is b = Table[1, {i, 1, 900}]. In Table 3, we compare the consumed time (in seconds) of solving this system by the GMRES solver and by its left-preconditioned counterpart V_1 A x = V_1 b. In Table 3, for example, PGMRES-(9) indicates that the linear system V_1 A x = V_1 b, in which V_1 has been calculated by (9), is solved by the GMRES solver. The tolerance in this test is a residual norm less than 10^-14. The results fully support the efficiency of the new scheme (9) in constructing robust preconditioners.
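To make the meaning of PGMRES-(9) concrete, the following sketch solves the left-preconditioned system with Mathematica's Krylov solver. It assumes V1 already holds the approximate inverse produced by applying (9) to A, keeps the ExampleData name as written in the paper, presumes the Krylov method exposes a "GMRES" suboption, and leaves the solver tolerance at its default rather than asserting the exact option syntax for the 10^-14 setting.

A = ExampleData[{"Matrix", "Bai/pde900"}];                 (* coefficient matrix of Example 9 *)
b = Table[1, {i, 1, 900}];                                 (* right-hand side *)
x = LinearSolve[V1.A, V1.b, Method -> {"Krylov", Method -> "GMRES"}];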

Example 10. This experiment evaluates the applicability of the new method for finding the Moore-Penrose inverse of 20 random sparse matrices (possessing sparse pseudoinverses) of size m × k = 1200 × 1500, as shown in Algorithm 4.

Note that the identity matrix should then be an m × m matrix, and the approximate pseudoinverses would be of size k × m. In this example, the initial approximate Moore-Penrose inverse is constructed by V0 = ConjugateTranspose[A[j]]*(1/((SingularValueList[A[j], 1][[1]])^2)) for each random test matrix, and the stopping criterion is ||V_{n+1} − V_n||_1 ≤ 10^-8. We only compared methods (1), denoted by "Schulz", (3), denoted by "KMS", and the new iterative scheme (9), denoted by "PM".
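To make the dimension bookkeeping concrete, here is a short sketch for one of the random test matrices A[j] produced by Algorithm 4 (the index j is illustrative): it forms the m × m identity used inside iteration (9) together with the SVD-based initial guess, which is of size k × m.

Idm = SparseArray[{i_, i_} -> 1., {m, m}];                        (* m x m identity *)
V0 = ConjugateTranspose[A[j]]/SingularValueList[A[j], 1][[1]]^2;  (* V0 = A^*/sigma_max^2 *)
Dimensions[V0]   (* -> {1500, 1200}, i.e., k x m *)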


m = 1200; k = 1500; number = 20;
Table[A[l] = SparseArray[{Band[{400, 1}, {m, k}] -> Random[] - I,
     Band[{1, 200}, {m, k}] -> 1.1 - Random[],
     Band[{-95, 100}] -> -0.02, Band[{-100, 500}] -> 0.1},
    {m, k}, 0], {l, number}];

Algorithm 4

Figure 3: The results of comparisons for Example 10 in terms of the number of iterations (methods: Schulz, KMS, and PM).

The results for the number of iterations and the running time are compared in Figures 3 and 4. They show a clear advantage of the new scheme in finding the Moore-Penrose inverse.

5. Conclusions

In this paper, we have developed an iterative method for finding the inverse of a matrix, which can be applied to real or complex matrices while preserving the sparsity feature in matrix-by-matrix multiplications through the use of a threshold. This allows us to considerably reduce the computational time and the memory usage of the new iterative method. We have shown that the suggested method (9) reaches ninth order of convergence. The extension of the scheme to the pseudoinverse has also been discussed in the paper. The applicability of the new scheme was illustrated numerically in Section 4.

Finally, according to the numerical results obtained and the concrete application of such solvers, we can conclude that the new method is efficient.

The new scheme (9) has been tested for matrix inversion. However, we believe that such iterations could also be applied for finding the inverse of an integer modulo p^n [34]. Following this idea will open a new topic of research combining numerical analysis and number theory. We will focus on this trend of research in future studies.

Figure 4: The results of comparisons for Example 10 in terms of the elapsed time in seconds (methods: Schulz, KMS, and PM).

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors thank the referees for their valuable comments and for the suggestions to improve the readability of the paper.

References

[1] R. B. Sidje and Y. Saad, "Rational approximation to the Fermi-Dirac function with applications in density functional theory," Numerical Algorithms, vol. 56, no. 3, pp. 455–479, 2011.
[2] F. Soleymani and P. S. Stanimirovic, "A higher order iterative method for computing the Drazin inverse," The Scientific World Journal, vol. 2013, Article ID 708647, 11 pages, 2013.
[3] G. Schulz, "Iterative Berechnung der reziproken Matrix," Zeitschrift für angewandte Mathematik und Mechanik, vol. 13, pp. 57–59, 1933.
[4] L. Lin, J. Lu, R. Car, and E. Weinan, "Multipole representation of the Fermi operator with application to the electronic structure analysis of metallic systems," Physical Review B—Condensed Matter and Materials Physics, vol. 79, no. 11, Article ID 115133, 2009.
[5] W. Li and Z. Li, "A family of iterative methods for computing the approximate inverse of a square matrix and inner inverse of a non-square matrix," Applied Mathematics and Computation, vol. 215, no. 9, pp. 3433–3442, 2010.
[6] H.-B. Li, T.-Z. Huang, Y. Zhang, X.-P. Liu, and T.-X. Gu, "Chebyshev-type methods and preconditioning techniques," Applied Mathematics and Computation, vol. 218, no. 2, pp. 260–270, 2011.
[7] W. Hackbusch, B. N. Khoromskij, and E. E. Tyrtyshnikov, "Approximate iterations for structured matrices," Numerische Mathematik, vol. 109, no. 3, pp. 365–383, 2008.
[8] T. Soderstrom and G. W. Stewart, "On the numerical properties of an iterative method for computing the Moore-Penrose generalized inverse," SIAM Journal on Numerical Analysis, vol. 11, pp. 61–74, 1974.
[9] X. Liu and Y. Qin, "Successive matrix squaring algorithm for computing the generalized inverse A^(2)_{T,S}," Journal of Applied Mathematics, vol. 2012, Article ID 262034, 12 pages, 2012.
[10] X. Liu and S. Huang, "Proper splitting for the generalized inverse A^(2)_{T,S} and its application on Banach spaces," Abstract and Applied Analysis, vol. 2012, Article ID 736929, 9 pages, 2012.
[11] X. Liu and F. Huang, "Higher-order convergent iterative method for computing the generalized inverse over Banach spaces," Abstract and Applied Analysis, vol. 2013, Article ID 356105, 5 pages, 2013.
[12] X. Liu, H. Jin, and Y. Yu, "Higher-order convergent iterative method for computing the generalized inverse and its application to Toeplitz matrices," Linear Algebra and Its Applications, vol. 439, no. 6, pp. 1635–1650, 2013.
[13] J. Rajagopalan, An iterative algorithm for inversion of matrices [M.S. thesis], Concordia University, Quebec, Canada, 1996.
[14] L. Grosz, "Preconditioning by incomplete block elimination," Numerical Linear Algebra with Applications, vol. 7, no. 7-8, pp. 527–541, 2000.
[15] G. Codevico, V. Y. Pan, and M. Van Barel, "Newton-like iteration based on a cubic polynomial for structured matrices," Numerical Algorithms, vol. 36, no. 4, pp. 365–380, 2004.
[16] V. Pan and R. Schreiber, "An improved Newton iteration for the generalized inverse of a matrix, with applications," SIAM Journal on Scientific and Statistical Computing, vol. 12, no. 5, pp. 1109–1130, 1991.
[17] A. Ben-Israel and T. N. E. Greville, Generalized Inverses, Springer, 2nd edition, 2003.
[18] F. Khaksar Haghani and F. Soleymani, "A new high-order stable numerical method for matrix inversion," The Scientific World Journal, vol. 2014, Article ID 830564, 10 pages, 2014.
[19] F. Soleymani, S. Karimi Vanani, H. I. Siyyam, and I. A. Al-Subaihi, "Numerical solution of nonlinear equations by an optimal eighth-order class of iterative methods," Annali dell'Universita di Ferrara, Sezione VII: Scienze Matematiche, vol. 59, no. 1, pp. 159–171, 2013.
[20] X. Wang and T. Zhang, "High-order Newton-type iterative methods with memory for solving nonlinear equations," Mathematical Communications, vol. 19, pp. 91–109, 2014.
[21] H. Hotelling, "Analysis of a complex of statistical variables into principal components," Journal of Educational Psychology, vol. 24, no. 6, pp. 417–441, 1933.
[22] E. Isaacson and H. B. Keller, Analysis of Numerical Methods, John Wiley & Sons, New York, NY, USA, 1966.
[23] E. V. Krishnamurthy and S. K. Sen, Numerical Algorithms—Computations in Science and Engineering, Affiliated East-West Press, New Delhi, India, 2007.
[24] L. Weiguo, L. Juan, and Q. Tiantian, "A family of iterative methods for computing Moore-Penrose inverse of a matrix," Linear Algebra and Its Applications, vol. 438, no. 1, pp. 47–56, 2013.
[25] F. Soleymani, "A fast convergent iterative solver for approximate inverse of matrices," Numerical Linear Algebra with Applications, 2013.
[26] M. Zaka Ullah, F. Soleymani, and A. S. Al-Fhaid, "An efficient matrix iteration for computing weighted Moore-Penrose inverse," Applied Mathematics and Computation, vol. 226, pp. 441–454, 2014.
[27] V. Y. Pan, Structured Matrices and Polynomials: Unified Superfast Algorithms, Birkhäuser/Springer, Boston, Mass, USA, 2001.
[28] M. Miladinovic, S. Miljkovic, and P. Stanimirovic, "Modified SMS method for computing outer inverses of Toeplitz matrices," Applied Mathematics and Computation, vol. 218, no. 7, pp. 3131–3143, 2011.
[29] H. Chen and Y. Wang, "A family of higher-order convergent iterative methods for computing the Moore-Penrose inverse," Applied Mathematics and Computation, vol. 218, no. 8, pp. 4012–4016, 2011.
[30] A. R. Soheili, F. Soleymani, and M. D. Petkovic, "On the computation of weighted Moore-Penrose inverse using a high-order matrix method," Computers & Mathematics with Applications, vol. 66, no. 11, pp. 2344–2351, 2013.
[31] F. Soleymani and P. S. Stanimirovic, "A note on the stability of a pth order iteration for finding generalized inverses," Applied Mathematics Letters, vol. 28, pp. 77–81, 2014.
[32] P. S. Stanimirovic and F. Soleymani, "A class of numerical algorithms for computing outer inverses," Journal of Computational and Applied Mathematics, vol. 263, pp. 236–245, 2014.
[33] S. Wolfram, The Mathematica Book, Wolfram Media, 5th edition, 2003.
[34] M. P. Knapp and C. Xenophontos, "Numerical analysis meets number theory: using rootfinding methods to calculate inverses mod p^n," Applicable Analysis and Discrete Mathematics, vol. 4, no. 1, pp. 23–31, 2010.

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 3: Research Article Approximating the Inverse of a Square ...downloads.hindawi.com/journals/jam/2014/731562.pdf · our proposed scheme for generalized outer inverses, that is, (2), ,

Journal of Applied Mathematics 3

to reach at least the convergence order 9 is practical First byfollowing the comparative index of informational efficiencyfor inverse finders [25] defined by 119864 = 120588120579 wherein 120588 and120579 stand for the convergence order and the number of matrix-matrix products then the informational index for (9) that is97 asymp 128 beats its other competitors 22 = 1 of (1) 33 = 1of (2)-(3) and 34 = 075 of (4) And second the significanceof the new scheme will be displayed in its implementationTo illustrate further such iterations depend totally on theinitial matrices Though there are certain and efficient waysfor finding 119881

0

in general such initial approximations take ahigh number of iterations (see eg Figure 3 the blue color) toarrive at the convergence phaseOn the other hand each cycleof the implementation of such Schulz-type methods includesone stopping criterion based on the use of amatrix norm andthis would impose further burden and load in general for thelow order methods in contrast to the high order methodssuch as (9) Because the computation of a matrix norm(usually sdot

2

for dense complex matrices and sdot 119865

for largesparse matrices) takes time and therefore higher number ofstepsiterations (which is the result of lower order methods)it will be costlier than the lower number of stepsiterationsof high order methods Hence the higher order (efficient)iterations are mostly better solvers in terms of computationaltime to achieve the desired accuracy

After this complete discussion of the need and efficacy ofthe new scheme (9) we are about to prove the convergencebehavior of (9) theoretically in what follows

Theorem 1 Let119860 = [119886119894119895

]119898times119898 be a nonsingular complex or real

matrix If the initial approximation 1198810

satisfies1003817100381710038171003817119868 minus 1198601198810

1003817100381710038171003817 lt 1 (10)

then the iterativemethod (9) converges with at least ninth orderto 119860minus1

Proof Let (10) hold and for the sake of simplicity assume that1198640

= 119868 minus 1198601198810

and 119864119899

= 119868 minus 119860119881119899

It is straightforward to have

119864119899+1

= 119868 minus 119860119881119899+1

=1

4[31198649

119899

+ 11986410

119899

] (11)

Hence by taking an arbitrary norm from both sides of (11)we obtain

1003817100381710038171003817119864119899+11003817100381710038171003817 le

1

4(31003817100381710038171003817119864119899

1003817100381710038171003817

9

+1003817100381710038171003817119864119899

1003817100381710038171003817

10

) (12)

The rest of the proof is similar to Theorem 21 of [26] andit is hence omitted Finally 119868 minus 119860119881

119899

rarr 0 when 119899 rarr infinand thus 119881

119899

rarr 119860minus1 as 119899 rarr infin So the sequence 119864

119899

2

is strictly monotonic decreasing Moreover if we denote 120598119899

=

119881119899

minus 119860minus1 as the error matrix in the iterative procedure (9)

then we could easily obtain the following error inequality

1003817100381710038171003817120598119899+11003817100381710038171003817 le (

1

4[3119860

8

+ 11986091003817100381710038171003817120598119899

1003817100381710038171003817])1003817100381710038171003817120598119899

1003817100381710038171003817

9

(13)

The error inequality (13) reveals that the iteration (9) con-verges with at least ninth order to 119860

minus1 The proof is nowcomplete

Some features of the new scheme (9) are as follows

(1) The new method can be taken into considerationfor finding approximate inverse (preconditioners) ofcomplex matrices Using a dropping strategy wecould keep the sparsity of a matrix to be usedas a preconditioner Such an action will be usedthroughout this paper for sparse matrices using thecommand Chop[exp tolerance] in Mathematica topreserve the sparsity of the approximate inverses 119881

119894

for any 119894

(2) Unlike the traditional direct solvers such as GEPP thenew scheme has the ability to be parallelized

(3) In order to further reduce the computational burdenof the matrix-by-matrix multiplications per comput-ing step of the new algorithm when dealing withsparse matrices in the new scheme by using SparseArray[] command in Mathematica we will preservethe sparsity features of the output approximate inversein a reasonable computational time Additionallyfor some special matrices such as Toeplitz or Van-dermonde types of matrices further computationalsavings are possible as discussed by Pan in [27] Seefor more information [28]

(4) If the initial guess commutes with the original matrixthen all matrices in the sequence 119881

119899

119899=infin

119899=0

satisfythe same property of commutation Notice that onlysome initial matrices guarantee that in all cases thesequence commutes with the matrix 119860

In the next section we present some analytical discussionabout the fact that the new method (9) could also be used forcomputing the Moore-Penrose generalized inverse

3 An Iterative Method forMoore-Penrose Inverse

It is well known in the literature that the Moore-Penroseinverse of a complex matrix 119860 isin C119898times119896 (also called pseudo-inverse) denoted by 119860dagger isin C119896times119898 is a matrix 119881 isin C119896times119898

satisfying the following conditions

119860119881119860 = 119860 119881119860119881 = 119881

(119860119881)lowast

= 119860119881 (119881119860)lowast

= 119881119860

(14)

wherein 119860lowast is the conjugate transpose of 119860 This inverse

uniquely existsVarious numerical solution methods have been devel-

oped for approximating Moore-Penrose inverse (see eg[29]) The most comprehensive review of such iterationscould be found in [17] while the authors mentioned thatiterative Schulz-type methods can be taken into account forfinding pseudoinverse For example it is known that (1)converges to the pseudoinverse in the general case if 119881

0

=

120572119860lowast where 0 lt 120572 lt 2120588(119860

lowast

119860) and 120588(sdot) denotes the spectralradius

Accordingly it is known that the new scheme (9) con-verges to the Moore-Penrose inverse In order validate this

4 Journal of Applied Mathematics

n = 30000

A = SparseArray[Band[195 10000] -gt minusI Band[1 1] -gt 19

Band[1000 2500] -gt 21 Band[minus60 minus1800] -gt 11

Band[minus600 170] -gt 2 + I Band[minus1350 250] -gt minus53

n n 0]

Algorithm 1

1

10000

20000

30000

1

10000

20000

30000

1 10000 20000 30000

1 10000 20000 30000

(a)

1

10000

20000

30000

1

10000

20000

30000

1 10000 20000 30000

1 10000 20000 30000

(b)

Figure 1 The matrix plot (a) and the inverse (b) of the large complex sparse 30000 times 30000matrix 119860 in Example 7

fact analytically we provide some important theoretics inwhat follows

Lemma 2 For the sequence 119881119899

119899=infin

119899=0

generated by the iterativeSchulz-type method (9) and the initial matrix (6) for any 119899 ge0 it holds that

(119860119881119899

)lowast

= 119860119881119899

(119881119899

119860)lowast

= 119881119899

119860

119881119899

119860119860dagger

= 119881119899

119860dagger

119860119881119899

= 119881119899

(15)

Proof The proof of this lemma is based on mathematicalinduction Such a procedure is similar to Lemma 31 of [30]and it is hence omitted

Theorem 3 For the rectangular complex matrix 119860 isin C119898times119896

and the sequence 119881119899

119899=infin

119899=0

generated by (9) for any 119899 ge 0using the initial approximation (6) the sequence converges tothe pseudoinverse 119860dagger with ninth order of convergence

Proof We consider E119899

= 119881119899

minus 119860dagger as the error matrix for

finding theMoore-Penrose inverseThen the proof is similarto the proof of Theorem 4 in [18] and it is hence omitted Wefinally have

1003817100381710038171003817E119899+11003817100381710038171003817 le

10038171003817100381710038171003817119860dagger

1003817100381710038171003817100381711986091003817100381710038171003817E119899

1003817100381710038171003817

9

(16)

Thus 119881119899

minus 119860dagger

rarr 0 that is the obtained sequence of (9)converges to the Moore-Penrose inverse as 119899 rarr +infin Thisends the proof

Theorem 4 Considering the same assumptions as inTheorem 3 then the iterative method (9) has asymptoticalstability for finding the Moore-Penrose generalized inverse

Proof The steps of proving the asymptotical stability of (9)are similar to those which have recently been taken fora general family of methods in [31] Hence the proof isomitted

Remark 5 It should be remarked that the generalization ofour proposed scheme for generalized outer inverses that is119860(2)

119879119878

is straight forward according to the recent work [32]

Remark 6 The new iteration (9) is free frommatrix power inits implementation and this allows one to apply it for findinggeneralized inverses easily

4 Computational Experiments

Using the computer algebra system Mathematica 8 [33] wenow compare our iterative algorithm with other Schulz-typemethods

Journal of Applied Mathematics 5

Table 1 Results of comparisons for Example 7

Methods(1) (3) (4) (9)

Convergence rate 2 3 3 7Iteration number 3 2 2 1Residual norm 832717 times 10

minus7

121303 times 10minus7

510014 times 10minus8

97105 times 10minus8

No of nonzero ele 591107 720849 800689 762847Time (in seconds) 422 405 932 418

Id = SparseArray[i i -gt 1 n n]

V = DiagonalMatrixSparseArray[1Normal[Diagonal[A]]]

Do[V1 = SparseArray[V]

V2 = Chop[AV1] V3 = 3 Id + V2(minus3 Id + V2)

V4 = SparseArray[V2V3]

V = Chop[minus(14) V1V3SparseArray[minus13 Id

+ V4(15 Id + V4(minus7 Id + V4))]]

Print[V]L[i]= N[Norm[Id minus VA 1]]

Print[The residual norm is

Column[i Frame -gt All FrameStyle -gt Directive[Blue]]

Column[L[i] Frame -gt All FrameStyle -gt Directive[Blue]]]

i 1] AbsoluteTiming

Algorithm 2

n = 10000

A = SparseArray[Band[minus700 minus200] -gt 1 Band[1 1] -gt minus15

Band[1 minus400] -gt 9 Band[1 minus400] -gt minus1

Band[2000 200] -gt 1 n n 0]

Algorithm 3

1

2000

4000

6000

8000

10000

1

2000

4000

6000

8000

10000

1 2000 4000 6000 8000 10000

1 2000 4000 6000 8000 10000

(a)

1

2000

4000

6000

8000

10000

1

2000

4000

6000

8000

10000

1 2000 4000 6000 8000 10000

1 2000 4000 6000 8000 10000

(b)

Figure 2 The matrix plot (a) and the inverse (b) of the large sparse 10000 times 10000matrix 119860 in Example 8

6 Journal of Applied Mathematics

Table 2 Results of comparisons for Example 8

Methods(1) (3) (4) (9)

Convergence rate 2 3 3 7Iteration number 10 7 6 3Residual norm 129011 times 10

minus10

163299 times 10minus9

118619 times 10minus11

768192 times 10minus10

No of nonzero ele 41635 42340 41635 41635Time (in seconds) 223 215 226 195

Table 3 Results of elapsed computational time for solving Example 9

Methods GMRES PGMRES-(1) PGMRES-(3) PGMRES-(4) PGMRES-(9)The total time 029 014 014 009 007

For numerical comparisons in this section we have usedthe methods (1) (3) (4) and (9) We implement the abovealgorithms on the following examples using the built-inprecision in Mathematica 8

Example 7 Let us consider the large complex sparse matrixof the size 30000 times 30000 with 79512 nonzero elementspossessing a sparse inverse as in Algorithm 1 (119868 = radicminus1)where the matrix plot of 119860 showing its structure has beendrawn in Figure 1(a)

Methods like (9) are powerful in finding an approximateinverse or in finding robust approximate inverse precondi-tioners in low number of steps in which the output form ofthe approximate inverse is also sparse

In this test problem the stopping criterion is 119868 minus 119881119899

1198601

le

10minus7 Table 1 shows the number of iterations for different

methods As the programs were running we found therunning time using the command Absolute Timing[] toreport the elapsedCPU time (in seconds) for this experimentNote that the initial guess has been constructed using 119881

0

=

diag(111988611

111988622

1119886119898119898

) in Example 7 The plot of theapproximate inverse obtained by applying (9) has beenportrayed in Figure 1(b) The reason which made (4) theworsemethod is in the fact of computingmatrix power whichis time consuming for sparse matrices In the tables of thispaper ldquoNo of nonzero elerdquo stands for the number of nonzeroelements

In Algorithm 2 we provide the written Mathematica 8code of the new scheme (9) in this example to clearly revealsthe simplicity of the new scheme in finding approximateinverses using a threshold Chop[exp 10minus10]

Example 8 Let us consider a large sparse 10000 times 10000

matrix (with 18601 nonzero elements) as shown inAlgorithm 3

Table 2 shows the number of iterations for differentmeth-ods in order to reveal the efficiency of the proposed iterationwith a special attention to the elapsed timeThematrix plot of119860 in this case showing its structure alongside its approximateinverse has been drawn in Figure 2 respectively Note that

in Example 8 we have used an initial value constructed by1198810

= 119860lowast

(1198601

119860infin

) The stopping criterion is same as theprevious example

The new method (9) is totally better in terms of thenumber of iterations and the computational time In thispaper the computer specifications are Microsoft WindowsXP Intel(R) Pentium(R) 4 CPU 320GHz and 4GBof RAM

Matrices used up to now are large and sparse enoughto clarify the applicability of the new scheme Anyhow forsome classes of problems there are collections of matricesthat are used when proving new iterative linear systemsolvers or new preconditioners such as Matrix MarkethttpmathnistgovMatrixMarket In the following test wecompare the computational time of solving a practical prob-lem using the above source

Example 9 Consider the linear system of 119860119909 = 119887 in which119860 is defined by A = Example Data [ldquoMatrixrdquo ldquoBaipde900rdquo]and the right hand side vector is b = Table [1 i 1 900] InTable 3 we compare the consuming time (in seconds) of solv-ing this system by GMRES solver and its left preconditionedsystem 119881

1

119860119909 = 1198811

119887 In Table 3 for example PGMRES-(9) shows that the linear system 119881

1

119860119909 = 1198811

119887 in which 1198811

has been calculated by (9) is solved by GMRES solver Thetolerance is the residual norm to be less than 10minus14 in this testThe results support fully the efficiency of the new scheme (9)in constructing robust preconditioners

Example 10 This experiment evaluates the applicability of thenewmethod for findingMoore-Penrose inverse of 20 randomsparse matrices (possessing sparse pseudo-inverses)119898 times 119896 =

1200 times 1500 as shown in Algorithm 4

Note that the identity matrix should then be an119898 times 119898 matrix and the approximate pseudoinverseswould be of the size 119896 times 119898 In this example the initialapproximate Moore-Penrose inverse is constructedby V0

= ConjugateTranspose [A[j]] lowast (1((Singular

ValueList[A[j] 1] [[1]])2)) for each random test matrixAnd the stopping criterion is ||Vn+1 minus Vn||1 le 10

minus8 We onlycompared methods (1) denoted by ldquoSchulzrdquo (3) denoted byldquoKMSrdquo and the new iterative scheme (9) denoted by ldquoPMrdquo

Journal of Applied Mathematics 7

m = 1200 k = 1500 number = 20

Table[A[l] = SparseArray[Band[400 1 m k] -gt Random[] minus I

Band[1 200 m k] -gt 11 -Random[]

Band[minus95 100] -gt minus002 Band[minus100 500] -gt 01

m k 0] l number]

Algorithm 4

5 10 15 20

50

40

30

20

10

0

Matrices

Num

ber o

f ite

ratio

ns

Schulz

PMKSM

Figure 3The results of comparisons for Example 10 in terms of thenumber of iterations

The results for the number of iterations and the runningtime are compared in Figures 3-4 They show a clear advan-tage of the new scheme in finding theMoore-Penrose inverse

5 Conclusions

In this paper we have developed an iterative method ininverse finding of matrices which has the capability to beapplied on real or complex matrices with preserving thesparsity feature in matrix-by-matrix multiplications using athreshold This allows us to interestingly reduce the com-putational time and usage memory in applying the newiterative method We have shown that the suggested method(9) reaches ninth order of convergence The extension ofthe scheme for pseudoinverse has also been discussed in thepaper The applicability of the new scheme was illustratednumerically in Section 4

Finally according to the numerical results obtained andthe concrete application of such solvers we can conclude thatthe new method is efficient

The new scheme (9) has been tested for matrix inversionHowever we believe that such iterations could also be appliedfor finding the inverse of an integer in modulo 119901

119899 [34]Following this idea will for sure open a new topic of researchwhich is a combination of Numerical Analysis and NumberTheory We will focus on this trend of research for futurestudies

5 10 15 20

30

25

20

15

10

05

00

MatricesC

ompu

tatio

nal t

ime (

s)

Schulz

PMKSM

Figure 4The results of comparisons for Example 10 in terms of theelapsed time

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgment

The authors thank the referees for their valuable commentsand for the suggestions to improve the readability of thepaper

References

[1] R B Sidje and Y Saad ldquoRational approximation to the Fermi-Dirac function with applications in density functional theoryrdquoNumerical Algorithms vol 56 no 3 pp 455ndash479 2011

[2] F Soleymani and P S Stanimirovic ldquoA higher order iterativemethod for computing the Drazin inverserdquoThe Scientific WorldJournal vol 2013 Article ID 708647 11 pages 2013

[3] G Schulz ldquoIterative Berechnung der Reziproken matrixrdquoZeitschrift fur angewandte Mathematik und Mechanik vol 13pp 57ndash59 1933

[4] L Lin J Lu R Car and EWeinan ldquoMultipole representation ofthe Fermi operator with application to the electronic structureanalysis of metallic systemsrdquo Physical Review BmdashCondensedMatter and Materials Physics vol 79 no 11 Article ID 1151332009

8 Journal of Applied Mathematics

[5] W Li and Z Li ldquoA family of iterative methods for computingthe approximate inverse of a square matrix and inner inverse ofa non-square matrixrdquo Applied Mathematics and Computationvol 215 no 9 pp 3433ndash3442 2010

[6] H-B Li T-Z Huang Y Zhang X-P Liu and T-X GuldquoChebyshev-type methods and preconditioning techniquesrdquoApplied Mathematics and Computation vol 218 no 2 pp 260ndash270 2011

[7] W Hackbusch B N Khoromskij and E E TyrtyshnikovldquoApproximate iterations for structured matricesrdquo NumerischeMathematik vol 109 no 3 pp 365ndash383 2008

[8] T Soderstrom and GW Stewart ldquoOn the numerical propertiesof an iterative method for computing the Moore-Penrosegeneralized inverserdquo SIAM Journal on Numerical Analysis vol11 pp 61ndash74 1974

[9] X Liu and Y Qin ldquoSuccessive matrix squaring algorithm forcomputing the generalized inverse 119860

(2)

119879119878

rdquo Journal of AppliedMathematics vol 2012 Article ID 262034 12 pages 2012

[10] X Liu and S Huang ldquoProper splitting for the generalizedinverse119860(2)

119879119878

and its application on Banach spacesrdquoAbstract andApplied Analysis vol 2012 Article ID 736929 9 pages 2012

[11] X Liu andFHuang ldquoHigher-order convergent iterativemethodfor computing the generalized inverse over Banach spacesrdquoAbstract and Applied Analysis vol 2013 Article ID 356105 5pages 2013

[12] X Liu H Jin and Y Yu ldquoHigher-order convergent iterativemethod for computing the generalized inverse and its applica-tion to Toeplitz matricesrdquo Linear Algebra and Its Applicationsvol 439 no 6 pp 1635ndash1650 2013

[13] J Rajagopalan An iterative algortihm for inversion of matrices[MS thesis] Concordia University Quebec Canada 1996

[14] L Grosz ldquoPreconditioning by incomplete block eliminationrdquoNumerical Linear Algebra with Applications vol 7 no 7-8 pp527ndash541 2000

[15] G Codevico V Y Pan and M Van Barel ldquoNewton-likeiteration based on a cubic polynomial for structured matricesrdquoNumerical Algorithms vol 36 no 4 pp 365ndash380 2004

[16] V Pan and R Schreiber ldquoAn improved Newton iteration forthe generalized inverse of a matrix with applicationsrdquo SIAMJournal on Scientific and Statistical Computing vol 12 no 5 pp1109ndash1130 1991

[17] A Ben-Israel and T N E Greville Generalized InversesSpringer 2nd edition 2003

[18] F Khaksar Haghani and F Soleymani ldquoA new high-order stablenumerical method for matrix inversionrdquo The Scientific WorldJournal vol 2014 Article ID 830564 10 pages 2014

[19] F Soleymani S Karimi Vanani H I Siyyam and I AAl-Subaihi ldquoNumerical solution of nonlinear equations byan optimal eighth-order class of iterative methodsrdquo AnnalidellrsquoUniversita di Ferrara Sezione VII ScienzeMatematiche vol59 no 1 pp 159ndash171 2013

[20] X Wang and T Zhang ldquoHigh-order Newton-type iterativemethods with memory for solving nonlinear equationsrdquoMath-ematical Communications vol 19 pp 91ndash109 2014

[21] H Hotelling ldquoAnalysis of a complex of statistical variables intoprincipal componentsrdquo Journal of Educational Psychology vol24 no 6 pp 417ndash441 1933

[22] E Isaacson and H B Keller Analysis of Numerical MethodsJohn Wiley amp Sons New York NY USA 1966

[23] E V Krishnamurthy and S K Sen Numerical AlgorithmsmdashComputations in Science and Engineering Affiliated East-WestPress New Delhi India 2007

[24] L Weiguo L Juan and Q Tiantian ldquoA family of iterativemethods for computing Moore-Penrose inverse of a matrixrdquoLinear Algebra and Its Applications vol 438 no 1 pp 47ndash562013

[25] F Soleymani ldquoA fast convergent iterative solver for approximateinverse of matricesrdquo Numerical Linear Algebra with Applica-tions 2013

[26] M Zaka Ullah F Soleymani and A S Al-Fhaid ldquoAn effi-cient matrix iteration for computing weighted Moore-Penroseinverserdquo Applied Mathematics and Computation vol 226 pp441ndash454 2014

[27] VY Pan StructuredMatrices andPolynomials Unified SuperfastAlgorithms Springer Birkhaauser Boston Mass USA 2001

[28] M Miladinovic S Miljkovic and P Stanimirovic ldquoModifiedSMSmethod for computing outer inverses of ToeplitzmatricesrdquoApplied Mathematics and Computation vol 218 no 7 pp 3131ndash3143 2011

[29] H Chen and Y Wang ldquoA family of higher-order convergentiterative methods for computing the Moore-Penrose inverserdquoAppliedMathematics and Computation vol 218 no 8 pp 4012ndash4016 2011

[30] A R Soheili F Soleymani and M D Petkovic ldquoOn the com-putation of weightedMoore-Penrose inverse using a high-ordermatrix methodrdquo Computers amp Mathematics with Applicationsvol 66 no 11 pp 2344ndash2351 2013

[31] F Soleymani and P S Stanimirovic ldquoA note on the stability ofa 119901th order iteration for finding generalized inversesrdquo AppliedMathematics Letters vol 28 pp 77ndash81 2014

[32] P S Stanimirovic and F Soleymani ldquoA class of numerical algo-rithms for computing outer inversesrdquo Journal of Computationaland Applied Mathematics vol 263 pp 236ndash245 2014

[33] S Wolfram The Mathematica Book Wolfram Media 5th edi-tion 2003

[34] M P Knapp and C Xenophontos ldquoNumerical analysis meetsnumber theory using rootfindingmethods to calculate inversesmod 119901119899rdquo Applicable Analysis and Discrete Mathematics vol 4no 1 pp 23ndash31 2010

Submit your manuscripts athttpwwwhindawicom

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical Problems in Engineering

Hindawi Publishing Corporationhttpwwwhindawicom

Differential EquationsInternational Journal of

Volume 2014

Applied MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Mathematical PhysicsAdvances in

Complex AnalysisJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

OptimizationJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Operations ResearchAdvances in

Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Function Spaces

Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

International Journal of Mathematics and Mathematical Sciences

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Algebra

Discrete Dynamics in Nature and Society

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Decision SciencesAdvances in

Discrete MathematicsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Stochastic AnalysisInternational Journal of

Page 4: Research Article Approximating the Inverse of a Square ...downloads.hindawi.com/journals/jam/2014/731562.pdf · our proposed scheme for generalized outer inverses, that is, (2), ,

4 Journal of Applied Mathematics

n = 30000

A = SparseArray[Band[195 10000] -gt minusI Band[1 1] -gt 19

Band[1000 2500] -gt 21 Band[minus60 minus1800] -gt 11

Band[minus600 170] -gt 2 + I Band[minus1350 250] -gt minus53

n n 0]

Algorithm 1

1

10000

20000

30000

1

10000

20000

30000

1 10000 20000 30000

1 10000 20000 30000

(a)

1

10000

20000

30000

1

10000

20000

30000

1 10000 20000 30000

1 10000 20000 30000

(b)

Figure 1 The matrix plot (a) and the inverse (b) of the large complex sparse 30000 times 30000matrix 119860 in Example 7

fact analytically we provide some important theoretics inwhat follows

Lemma 2 For the sequence 119881119899

119899=infin

119899=0

generated by the iterativeSchulz-type method (9) and the initial matrix (6) for any 119899 ge0 it holds that

(119860119881119899

)lowast

= 119860119881119899

(119881119899

119860)lowast

= 119881119899

119860

119881119899

119860119860dagger

= 119881119899

119860dagger

119860119881119899

= 119881119899

(15)

Proof The proof of this lemma is based on mathematicalinduction Such a procedure is similar to Lemma 31 of [30]and it is hence omitted

Theorem 3 For the rectangular complex matrix 119860 isin C119898times119896

and the sequence 119881119899

119899=infin

119899=0

generated by (9) for any 119899 ge 0using the initial approximation (6) the sequence converges tothe pseudoinverse 119860dagger with ninth order of convergence

Proof We consider E119899

= 119881119899

minus 119860dagger as the error matrix for

finding theMoore-Penrose inverseThen the proof is similarto the proof of Theorem 4 in [18] and it is hence omitted Wefinally have

1003817100381710038171003817E119899+11003817100381710038171003817 le

10038171003817100381710038171003817119860dagger

1003817100381710038171003817100381711986091003817100381710038171003817E119899

1003817100381710038171003817

9

(16)

Thus 119881119899

minus 119860dagger

rarr 0 that is the obtained sequence of (9)converges to the Moore-Penrose inverse as 119899 rarr +infin Thisends the proof

Theorem 4 Considering the same assumptions as inTheorem 3 then the iterative method (9) has asymptoticalstability for finding the Moore-Penrose generalized inverse

Proof The steps of proving the asymptotical stability of (9)are similar to those which have recently been taken fora general family of methods in [31] Hence the proof isomitted

Remark 5 It should be remarked that the generalization ofour proposed scheme for generalized outer inverses that is119860(2)

119879119878

is straight forward according to the recent work [32]

Remark 6 The new iteration (9) is free frommatrix power inits implementation and this allows one to apply it for findinggeneralized inverses easily

4 Computational Experiments

Using the computer algebra system Mathematica 8 [33] wenow compare our iterative algorithm with other Schulz-typemethods

Journal of Applied Mathematics 5

Table 1 Results of comparisons for Example 7

Methods(1) (3) (4) (9)

Convergence rate 2 3 3 7Iteration number 3 2 2 1Residual norm 832717 times 10

minus7

121303 times 10minus7

510014 times 10minus8

97105 times 10minus8

No of nonzero ele 591107 720849 800689 762847Time (in seconds) 422 405 932 418

Id = SparseArray[i i -gt 1 n n]

V = DiagonalMatrixSparseArray[1Normal[Diagonal[A]]]

Do[V1 = SparseArray[V]

V2 = Chop[AV1] V3 = 3 Id + V2(minus3 Id + V2)

V4 = SparseArray[V2V3]

V = Chop[minus(14) V1V3SparseArray[minus13 Id

+ V4(15 Id + V4(minus7 Id + V4))]]

Print[V]L[i]= N[Norm[Id minus VA 1]]

Print[The residual norm is

Column[i Frame -gt All FrameStyle -gt Directive[Blue]]

Column[L[i] Frame -gt All FrameStyle -gt Directive[Blue]]]

i 1] AbsoluteTiming

Algorithm 2

n = 10000

A = SparseArray[Band[minus700 minus200] -gt 1 Band[1 1] -gt minus15

Band[1 minus400] -gt 9 Band[1 minus400] -gt minus1

Band[2000 200] -gt 1 n n 0]

Algorithm 3

1

2000

4000

6000

8000

10000

1

2000

4000

6000

8000

10000

1 2000 4000 6000 8000 10000

1 2000 4000 6000 8000 10000

(a)

1

2000

4000

6000

8000

10000

1

2000

4000

6000

8000

10000

1 2000 4000 6000 8000 10000

1 2000 4000 6000 8000 10000

(b)

Figure 2 The matrix plot (a) and the inverse (b) of the large sparse 10000 times 10000matrix 119860 in Example 8

6 Journal of Applied Mathematics

Table 2 Results of comparisons for Example 8

Methods(1) (3) (4) (9)

Convergence rate 2 3 3 7Iteration number 10 7 6 3Residual norm 129011 times 10

minus10

163299 times 10minus9

118619 times 10minus11

768192 times 10minus10

No of nonzero ele 41635 42340 41635 41635Time (in seconds) 223 215 226 195

Table 3 Results of elapsed computational time for solving Example 9

Methods GMRES PGMRES-(1) PGMRES-(3) PGMRES-(4) PGMRES-(9)The total time 029 014 014 009 007

For numerical comparisons in this section we have usedthe methods (1) (3) (4) and (9) We implement the abovealgorithms on the following examples using the built-inprecision in Mathematica 8

Example 7 Let us consider the large complex sparse matrixof the size 30000 times 30000 with 79512 nonzero elementspossessing a sparse inverse as in Algorithm 1 (119868 = radicminus1)where the matrix plot of 119860 showing its structure has beendrawn in Figure 1(a)

Methods like (9) are powerful in finding an approximateinverse or in finding robust approximate inverse precondi-tioners in low number of steps in which the output form ofthe approximate inverse is also sparse

In this test problem the stopping criterion is 119868 minus 119881119899

1198601

le

10minus7 Table 1 shows the number of iterations for different

methods As the programs were running we found therunning time using the command Absolute Timing[] toreport the elapsedCPU time (in seconds) for this experimentNote that the initial guess has been constructed using 119881

0

=

diag(111988611

111988622

1119886119898119898

) in Example 7 The plot of theapproximate inverse obtained by applying (9) has beenportrayed in Figure 1(b) The reason which made (4) theworsemethod is in the fact of computingmatrix power whichis time consuming for sparse matrices In the tables of thispaper ldquoNo of nonzero elerdquo stands for the number of nonzeroelements

In Algorithm 2 we provide the written Mathematica 8code of the new scheme (9) in this example to clearly revealsthe simplicity of the new scheme in finding approximateinverses using a threshold Chop[exp 10minus10]

Example 8 Let us consider a large sparse 10000 times 10000

matrix (with 18601 nonzero elements) as shown inAlgorithm 3

Table 2 shows the number of iterations for differentmeth-ods in order to reveal the efficiency of the proposed iterationwith a special attention to the elapsed timeThematrix plot of119860 in this case showing its structure alongside its approximateinverse has been drawn in Figure 2 respectively Note that

in Example 8 we have used an initial value constructed by1198810

= 119860lowast

(1198601

119860infin

) The stopping criterion is same as theprevious example

The new method (9) is totally better in terms of thenumber of iterations and the computational time In thispaper the computer specifications are Microsoft WindowsXP Intel(R) Pentium(R) 4 CPU 320GHz and 4GBof RAM

Matrices used up to now are large and sparse enoughto clarify the applicability of the new scheme Anyhow forsome classes of problems there are collections of matricesthat are used when proving new iterative linear systemsolvers or new preconditioners such as Matrix MarkethttpmathnistgovMatrixMarket In the following test wecompare the computational time of solving a practical prob-lem using the above source

Example 9 Consider the linear system of 119860119909 = 119887 in which119860 is defined by A = Example Data [ldquoMatrixrdquo ldquoBaipde900rdquo]and the right hand side vector is b = Table [1 i 1 900] InTable 3 we compare the consuming time (in seconds) of solv-ing this system by GMRES solver and its left preconditionedsystem 119881

1

119860119909 = 1198811

119887 In Table 3 for example PGMRES-(9) shows that the linear system 119881

1

119860119909 = 1198811

119887 in which 1198811

has been calculated by (9) is solved by GMRES solver Thetolerance is the residual norm to be less than 10minus14 in this testThe results support fully the efficiency of the new scheme (9)in constructing robust preconditioners

Example 10 This experiment evaluates the applicability of thenewmethod for findingMoore-Penrose inverse of 20 randomsparse matrices (possessing sparse pseudo-inverses)119898 times 119896 =

1200 times 1500 as shown in Algorithm 4

Note that the identity matrix should then be an119898 times 119898 matrix and the approximate pseudoinverseswould be of the size 119896 times 119898 In this example the initialapproximate Moore-Penrose inverse is constructedby V0

= ConjugateTranspose [A[j]] lowast (1((Singular

ValueList[A[j] 1] [[1]])2)) for each random test matrixAnd the stopping criterion is ||Vn+1 minus Vn||1 le 10

minus8 We onlycompared methods (1) denoted by ldquoSchulzrdquo (3) denoted byldquoKMSrdquo and the new iterative scheme (9) denoted by ldquoPMrdquo

Journal of Applied Mathematics 7

m = 1200 k = 1500 number = 20

Table[A[l] = SparseArray[Band[400 1 m k] -gt Random[] minus I

Band[1 200 m k] -gt 11 -Random[]

Band[minus95 100] -gt minus002 Band[minus100 500] -gt 01

m k 0] l number]

Algorithm 4

5 10 15 20

50

40

30

20

10

0

Matrices

Num

ber o

f ite

ratio

ns

Schulz

PMKSM

Figure 3The results of comparisons for Example 10 in terms of thenumber of iterations

The results for the number of iterations and the runningtime are compared in Figures 3-4 They show a clear advan-tage of the new scheme in finding theMoore-Penrose inverse

5 Conclusions

In this paper we have developed an iterative method ininverse finding of matrices which has the capability to beapplied on real or complex matrices with preserving thesparsity feature in matrix-by-matrix multiplications using athreshold This allows us to interestingly reduce the com-putational time and usage memory in applying the newiterative method We have shown that the suggested method(9) reaches ninth order of convergence The extension ofthe scheme for pseudoinverse has also been discussed in thepaper The applicability of the new scheme was illustratednumerically in Section 4

Finally according to the numerical results obtained andthe concrete application of such solvers we can conclude thatthe new method is efficient

The new scheme (9) has been tested for matrix inversionHowever we believe that such iterations could also be appliedfor finding the inverse of an integer in modulo 119901

119899 [34]Following this idea will for sure open a new topic of researchwhich is a combination of Numerical Analysis and NumberTheory We will focus on this trend of research for futurestudies

5 10 15 20

30

25

20

15

10

05

00

MatricesC

ompu

tatio

nal t

ime (

s)

Schulz

PMKSM

Figure 4The results of comparisons for Example 10 in terms of theelapsed time

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

Acknowledgment

The authors thank the referees for their valuable commentsand for the suggestions to improve the readability of thepaper

References

[1] R B Sidje and Y Saad ldquoRational approximation to the Fermi-Dirac function with applications in density functional theoryrdquoNumerical Algorithms vol 56 no 3 pp 455ndash479 2011

[2] F Soleymani and P S Stanimirovic ldquoA higher order iterativemethod for computing the Drazin inverserdquoThe Scientific WorldJournal vol 2013 Article ID 708647 11 pages 2013

[3] G Schulz ldquoIterative Berechnung der Reziproken matrixrdquoZeitschrift fur angewandte Mathematik und Mechanik vol 13pp 57ndash59 1933

[4] L Lin J Lu R Car and EWeinan ldquoMultipole representation ofthe Fermi operator with application to the electronic structureanalysis of metallic systemsrdquo Physical Review BmdashCondensedMatter and Materials Physics vol 79 no 11 Article ID 1151332009

8 Journal of Applied Mathematics

[5] W Li and Z Li ldquoA family of iterative methods for computingthe approximate inverse of a square matrix and inner inverse ofa non-square matrixrdquo Applied Mathematics and Computationvol 215 no 9 pp 3433ndash3442 2010

[6] H-B Li T-Z Huang Y Zhang X-P Liu and T-X GuldquoChebyshev-type methods and preconditioning techniquesrdquoApplied Mathematics and Computation vol 218 no 2 pp 260ndash270 2011

[7] W Hackbusch B N Khoromskij and E E TyrtyshnikovldquoApproximate iterations for structured matricesrdquo NumerischeMathematik vol 109 no 3 pp 365ndash383 2008

[8] T Soderstrom and GW Stewart ldquoOn the numerical propertiesof an iterative method for computing the Moore-Penrosegeneralized inverserdquo SIAM Journal on Numerical Analysis vol11 pp 61ndash74 1974

[9] X Liu and Y Qin ldquoSuccessive matrix squaring algorithm forcomputing the generalized inverse 119860

(2)

119879119878

rdquo Journal of AppliedMathematics vol 2012 Article ID 262034 12 pages 2012

[10] X Liu and S Huang ldquoProper splitting for the generalizedinverse119860(2)

119879119878

and its application on Banach spacesrdquoAbstract andApplied Analysis vol 2012 Article ID 736929 9 pages 2012

[11] X Liu andFHuang ldquoHigher-order convergent iterativemethodfor computing the generalized inverse over Banach spacesrdquoAbstract and Applied Analysis vol 2013 Article ID 356105 5pages 2013

[12] X Liu H Jin and Y Yu ldquoHigher-order convergent iterativemethod for computing the generalized inverse and its applica-tion to Toeplitz matricesrdquo Linear Algebra and Its Applicationsvol 439 no 6 pp 1635ndash1650 2013

[13] J Rajagopalan An iterative algortihm for inversion of matrices[MS thesis] Concordia University Quebec Canada 1996

[14] L Grosz ldquoPreconditioning by incomplete block eliminationrdquoNumerical Linear Algebra with Applications vol 7 no 7-8 pp527ndash541 2000

[15] G Codevico V Y Pan and M Van Barel ldquoNewton-likeiteration based on a cubic polynomial for structured matricesrdquoNumerical Algorithms vol 36 no 4 pp 365ndash380 2004

[16] V Pan and R Schreiber ldquoAn improved Newton iteration forthe generalized inverse of a matrix with applicationsrdquo SIAMJournal on Scientific and Statistical Computing vol 12 no 5 pp1109ndash1130 1991

[17] A Ben-Israel and T N E Greville Generalized InversesSpringer 2nd edition 2003

[18] F Khaksar Haghani and F Soleymani ldquoA new high-order stablenumerical method for matrix inversionrdquo The Scientific WorldJournal vol 2014 Article ID 830564 10 pages 2014

[19] F Soleymani S Karimi Vanani H I Siyyam and I AAl-Subaihi ldquoNumerical solution of nonlinear equations byan optimal eighth-order class of iterative methodsrdquo AnnalidellrsquoUniversita di Ferrara Sezione VII ScienzeMatematiche vol59 no 1 pp 159ndash171 2013

[20] X Wang and T Zhang ldquoHigh-order Newton-type iterativemethods with memory for solving nonlinear equationsrdquoMath-ematical Communications vol 19 pp 91ndash109 2014

[21] H Hotelling ldquoAnalysis of a complex of statistical variables intoprincipal componentsrdquo Journal of Educational Psychology vol24 no 6 pp 417ndash441 1933

[22] E Isaacson and H B Keller Analysis of Numerical MethodsJohn Wiley amp Sons New York NY USA 1966

[23] E V Krishnamurthy and S K Sen Numerical AlgorithmsmdashComputations in Science and Engineering Affiliated East-WestPress New Delhi India 2007

[24] L Weiguo L Juan and Q Tiantian ldquoA family of iterativemethods for computing Moore-Penrose inverse of a matrixrdquoLinear Algebra and Its Applications vol 438 no 1 pp 47ndash562013

[25] F Soleymani ldquoA fast convergent iterative solver for approximateinverse of matricesrdquo Numerical Linear Algebra with Applica-tions 2013

[26] M Zaka Ullah F Soleymani and A S Al-Fhaid ldquoAn effi-cient matrix iteration for computing weighted Moore-Penroseinverserdquo Applied Mathematics and Computation vol 226 pp441ndash454 2014

[27] VY Pan StructuredMatrices andPolynomials Unified SuperfastAlgorithms Springer Birkhaauser Boston Mass USA 2001

[28] M Miladinovic S Miljkovic and P Stanimirovic ldquoModifiedSMSmethod for computing outer inverses of ToeplitzmatricesrdquoApplied Mathematics and Computation vol 218 no 7 pp 3131ndash3143 2011

[29] H Chen and Y Wang ldquoA family of higher-order convergentiterative methods for computing the Moore-Penrose inverserdquoAppliedMathematics and Computation vol 218 no 8 pp 4012ndash4016 2011

[30] A R Soheili F Soleymani and M D Petkovic ldquoOn the com-putation of weightedMoore-Penrose inverse using a high-ordermatrix methodrdquo Computers amp Mathematics with Applicationsvol 66 no 11 pp 2344ndash2351 2013

[31] F Soleymani and P S Stanimirovic ldquoA note on the stability ofa 119901th order iteration for finding generalized inversesrdquo AppliedMathematics Letters vol 28 pp 77ndash81 2014

[32] P S Stanimirovic and F Soleymani ldquoA class of numerical algo-rithms for computing outer inversesrdquo Journal of Computationaland Applied Mathematics vol 263 pp 236ndash245 2014

[33] S Wolfram The Mathematica Book Wolfram Media 5th edi-tion 2003

[34] M P Knapp and C Xenophontos ldquoNumerical analysis meetsnumber theory using rootfindingmethods to calculate inversesmod 119901119899rdquo Applicable Analysis and Discrete Mathematics vol 4no 1 pp 23ndash31 2010

Table 1: Results of comparisons for Example 7.

Methods               (1)               (3)               (4)               (9)
Convergence rate      2                 3                 3                 7
Iteration number      3                 2                 2                 1
Residual norm         8.32717 × 10^-7   1.21303 × 10^-7   5.10014 × 10^-8   9.7105 × 10^-8
No. of nonzero ele.   591107            720849            800689            762847
Time (in seconds)     4.22              4.05              9.32              4.18

Id = SparseArray[{i_, i_} -> 1., {n, n}];                    (* sparse n x n identity matrix *)
V = DiagonalMatrix[SparseArray[1/Normal[Diagonal[A]]]];      (* initial guess V0 = diag(1/a_11, ..., 1/a_nn) *)
Do[V1 = SparseArray[V];
  V2 = Chop[A.V1]; V3 = 3 Id + V2.(-3 Id + V2);
  V4 = SparseArray[V2.V3];
  V = Chop[-(1/4) V1.V3.SparseArray[-13 Id
      + V4.(15 Id + V4.(-7 Id + V4))]];                      (* one step of the new scheme (9) *)
  Print[V]; L[i] = N[Norm[Id - V.A, 1]];                     (* residual norm ||I - V A||_1 *)
  Print["The residual norm is ",
    Column[{i}, Frame -> All, FrameStyle -> Directive[Blue]],
    Column[{L[i]}, Frame -> All, FrameStyle -> Directive[Blue]]],
  {i, 1}] // AbsoluteTiming

Algorithm 2

n = 10000;
A = SparseArray[{Band[{-700, -200}] -> 1, Band[{1, 1}] -> -15,
    Band[{1, -400}] -> 9, Band[{1, -400}] -> -1,
    Band[{2000, 200}] -> 1}, {n, n}, 0.];

Algorithm 3
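As a quick sanity check (a sketch, assuming A has just been built by Algorithm 3), the number of explicitly stored elements can be counted; it should agree with the 18601 nonzero elements reported for Example 8. The last rule returned by ArrayRules is the default-value rule and is therefore excluded:

Length[ArrayRules[A]] - 1   (* number of stored nonzero positions of the test matrix *)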

Figure 2: The matrix plot (a) and the inverse (b) of the large sparse 10000 × 10000 matrix A in Example 8 (axes: row and column indices from 1 to 10000).


Table 2: Results of comparisons for Example 8.

Methods               (1)                (3)                (4)                (9)
Convergence rate      2                  3                  3                  7
Iteration number      10                 7                  6                  3
Residual norm         1.29011 × 10^-10   1.63299 × 10^-9    1.18619 × 10^-11   7.68192 × 10^-10
No. of nonzero ele.   41635              42340              41635              41635
Time (in seconds)     2.23               2.15               2.26               1.95

Table 3: Results of elapsed computational time for solving Example 9.

Methods          GMRES   PGMRES-(1)   PGMRES-(3)   PGMRES-(4)   PGMRES-(9)
The total time   0.29    0.14         0.14         0.09         0.07

For numerical comparisons in this section, we have used the methods (1), (3), (4), and (9). We implement the above algorithms on the following examples using the built-in precision in Mathematica 8.

Example 7. Let us consider the large complex sparse matrix of size 30000 × 30000, with 79512 nonzero elements, possessing a sparse inverse, as in Algorithm 1 (I = √-1), where the matrix plot of A, showing its structure, has been drawn in Figure 1(a).

Methods like (9) are powerful in finding an approximate inverse, or robust approximate inverse preconditioners, in a low number of steps, and the output form of the approximate inverse is also sparse.

In this test problem, the stopping criterion is ‖I − V_n A‖_1 ≤ 10^-7. Table 1 shows the number of iterations for the different methods. As the programs were running, we found the running time using the command AbsoluteTiming[] to report the elapsed CPU time (in seconds) for this experiment. Note that the initial guess has been constructed using V_0 = diag(1/a_11, 1/a_22, ..., 1/a_mm) in Example 7. The plot of the approximate inverse obtained by applying (9) has been portrayed in Figure 1(b). The reason which made (4) the worst method is the computation of matrix powers, which is time consuming for sparse matrices. In the tables of this paper, "No. of nonzero ele." stands for the number of nonzero elements.

In Algorithm 2, we provide the written Mathematica 8 code of the new scheme (9) for this example, to clearly reveal the simplicity of the new scheme in finding approximate inverses using a threshold, Chop[exp, 10^-10].
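To illustrate the role of this threshold, here is a minimal sketch with an assumed toy matrix W (not taken from the paper): Chop replaces entries below the tolerance by exact zeros, and wrapping the result in SparseArray removes them from storage, which is how the sparsity of the iterates is preserved.

W = {{1., 3.*10^-12}, {-2.*10^-11, 4.}};   (* assumed 2 x 2 example *)
W1 = SparseArray[Chop[W, 10^-10]];         (* entries below 10^-10 become exact zeros *)
ArrayRules[W1]                             (* only {1,1} -> 1. and {2,2} -> 4. remain stored *)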

Example 8. Let us consider a large sparse 10000 × 10000 matrix (with 18601 nonzero elements), as shown in Algorithm 3.

Table 2 shows the number of iterations for the different methods, in order to reveal the efficiency of the proposed iteration with special attention to the elapsed time. The matrix plot of A in this case, showing its structure, alongside its approximate inverse, has been drawn in Figure 2. Note that in Example 8 we have used an initial value constructed by V_0 = A*/(‖A‖_1 ‖A‖_∞). The stopping criterion is the same as in the previous example.
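For completeness, a minimal Mathematica sketch of this initial value reads as follows (the variable names are assumptions; A is the matrix generated by Algorithm 3):

V0 = ConjugateTranspose[A]/(Norm[A, 1] Norm[A, Infinity]);   (* V0 = A*/(||A||_1 ||A||_inf) *)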

The new method (9) is clearly better in terms of the number of iterations and the computational time. In this paper, the computer specifications are Microsoft Windows XP, Intel(R) Pentium(R) 4 CPU, 3.20 GHz, and 4 GB of RAM.

The matrices used up to now are large and sparse enough to clarify the applicability of the new scheme. Anyhow, for some classes of problems there are collections of matrices that are used when testing new iterative linear system solvers or new preconditioners, such as Matrix Market (http://math.nist.gov/MatrixMarket). In the following test, we compare the computational time of solving a practical problem using the above source.

Example 9. Consider the linear system Ax = b, in which A is defined by A = ExampleData[{"Matrix", "Bai/pde900"}] and the right-hand side vector is b = Table[1, {i, 1, 900}]. In Table 3 we compare the consumed time (in seconds) of solving this system by the GMRES solver and by its left preconditioned counterpart V_1 A x = V_1 b. In Table 3, for example, PGMRES-(9) means that the linear system V_1 A x = V_1 b, in which V_1 has been calculated by (9), is solved by the GMRES solver. The tolerance in this test is a residual norm less than 10^-14. The results fully support the efficiency of the new scheme (9) in constructing robust preconditioners.
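A minimal sketch of this comparison is given below. It assumes that V1 has already been produced by one step of (9) applied to A, and it uses Mathematica's built-in Krylov solver as a stand-in for the GMRES solver whose timings are reported in Table 3:

A = ExampleData[{"Matrix", "Bai/pde900"}];
b = Table[1., {i, 1, 900}];
t1 = First[AbsoluteTiming[LinearSolve[A, b, Method -> "Krylov"]]];        (* unpreconditioned *)
t2 = First[AbsoluteTiming[LinearSolve[V1.A, V1.b, Method -> "Krylov"]]];  (* left preconditioned *)
{t1, t2}                                                                  (* elapsed times in seconds *)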

Example 10. This experiment evaluates the applicability of the new method for finding the Moore-Penrose inverse of 20 random sparse matrices (possessing sparse pseudoinverses) of size m × k = 1200 × 1500, as shown in Algorithm 4.

Note that the identity matrix should then be an m × m matrix and the approximate pseudoinverses would be of size k × m. In this example, the initial approximate Moore-Penrose inverse is constructed by V0 = ConjugateTranspose[A[j]]*(1/((SingularValueList[A[j], 1][[1]])^2)) for each random test matrix, and the stopping criterion is ‖V_{n+1} − V_n‖_1 ≤ 10^-8. We only compared the methods (1), denoted by "Schulz", (3), denoted by "KMS", and the new iterative scheme (9), denoted by "PM".


m = 1200; k = 1500; number = 20;
Table[A[l] = SparseArray[{Band[{400, 1}, {m, k}] -> Random[] - I,
    Band[{1, 200}, {m, k}] -> 1.1 - Random[],
    Band[{-95, 100}] -> -0.02, Band[{-100, 500}] -> 0.1},
    {m, k}, 0.], {l, number}];

Algorithm 4
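Based on this setup and on the iteration written out in Algorithm 2, the following sketch shows the initial value and one step of the scheme for one of the rectangular test matrices; the index j = 1 and the sparse m × m identity are illustrative assumptions consistent with the text above:

j = 1;                                        (* any of the 20 random test matrices *)
Id = SparseArray[{i_, i_} -> 1., {m, m}];     (* the identity must be m x m here *)
V0 = ConjugateTranspose[A[j]]*(1/((SingularValueList[A[j], 1][[1]])^2));
V2 = A[j].V0; V3 = 3 Id + V2.(-3 Id + V2); V4 = V2.V3;
V1 = -(1/4) V0.V3.(-13 Id + V4.(15 Id + V4.(-7 Id + V4)));   (* one step, as in Algorithm 2 *)
Norm[V1 - V0, 1]                              (* the quantity used in the stopping criterion *)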

Figure 3: The results of comparisons for Example 10 in terms of the number of iterations (horizontal axis: test matrices; vertical axis: number of iterations).

The results for the number of iterations and the running time are compared in Figures 3 and 4. They show a clear advantage of the new scheme in finding the Moore-Penrose inverse.

5. Conclusions

In this paper, we have developed an iterative method for finding the inverse of a matrix, which can be applied to real or complex matrices while preserving the sparsity feature in matrix-by-matrix multiplications by using a threshold. This allows us to considerably reduce the computational time and memory usage when applying the new iterative method. We have shown that the suggested method (9) reaches ninth order of convergence. The extension of the scheme to the pseudoinverse has also been discussed in the paper. The applicability of the new scheme was illustrated numerically in Section 4.

Finally, according to the numerical results obtained and the concrete application of such solvers, we can conclude that the new method is efficient.

The new scheme (9) has been tested for matrix inversion. However, we believe that such iterations could also be applied for finding the inverse of an integer modulo p^n [34]. Following this idea will surely open a new topic of research, which is a combination of numerical analysis and number theory. We will focus on this trend of research in future studies.
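As a small illustration of this idea (a sketch with assumed values of a, p, and n, not taken from [34]), the integer analogue of the Schulz iteration (1) lifts an inverse modulo p to an inverse modulo p^n:

a = 7; p = 5; n = 6;
x = PowerMod[a, -1, p];              (* inverse of a modulo p as the starting value *)
Do[x = Mod[x (2 - a x), p^n], {4}];  (* x -> x (2 - a x), reduced modulo p^n *)
Mod[a x, p^n]                        (* returns 1, so x is the inverse of a modulo p^6 *)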

Figure 4: The results of comparisons for Example 10 in terms of the elapsed time (horizontal axis: test matrices; vertical axis: computational time in seconds).

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors thank the referees for their valuable comments and for the suggestions to improve the readability of the paper.

References

[1] R. B. Sidje and Y. Saad, "Rational approximation to the Fermi-Dirac function with applications in density functional theory," Numerical Algorithms, vol. 56, no. 3, pp. 455–479, 2011.

[2] F. Soleymani and P. S. Stanimirovic, "A higher order iterative method for computing the Drazin inverse," The Scientific World Journal, vol. 2013, Article ID 708647, 11 pages, 2013.

[3] G. Schulz, "Iterative Berechnung der reziproken Matrix," Zeitschrift für angewandte Mathematik und Mechanik, vol. 13, pp. 57–59, 1933.

[4] L. Lin, J. Lu, R. Car, and E. Weinan, "Multipole representation of the Fermi operator with application to the electronic structure analysis of metallic systems," Physical Review B: Condensed Matter and Materials Physics, vol. 79, no. 11, Article ID 115133, 2009.

[5] W. Li and Z. Li, "A family of iterative methods for computing the approximate inverse of a square matrix and inner inverse of a non-square matrix," Applied Mathematics and Computation, vol. 215, no. 9, pp. 3433–3442, 2010.

[6] H.-B. Li, T.-Z. Huang, Y. Zhang, X.-P. Liu, and T.-X. Gu, "Chebyshev-type methods and preconditioning techniques," Applied Mathematics and Computation, vol. 218, no. 2, pp. 260–270, 2011.

[7] W. Hackbusch, B. N. Khoromskij, and E. E. Tyrtyshnikov, "Approximate iterations for structured matrices," Numerische Mathematik, vol. 109, no. 3, pp. 365–383, 2008.

[8] T. Soderstrom and G. W. Stewart, "On the numerical properties of an iterative method for computing the Moore-Penrose generalized inverse," SIAM Journal on Numerical Analysis, vol. 11, pp. 61–74, 1974.

[9] X. Liu and Y. Qin, "Successive matrix squaring algorithm for computing the generalized inverse A^(2)_{T,S}," Journal of Applied Mathematics, vol. 2012, Article ID 262034, 12 pages, 2012.

[10] X. Liu and S. Huang, "Proper splitting for the generalized inverse A^(2)_{T,S} and its application on Banach spaces," Abstract and Applied Analysis, vol. 2012, Article ID 736929, 9 pages, 2012.

[11] X. Liu and F. Huang, "Higher-order convergent iterative method for computing the generalized inverse over Banach spaces," Abstract and Applied Analysis, vol. 2013, Article ID 356105, 5 pages, 2013.

[12] X. Liu, H. Jin, and Y. Yu, "Higher-order convergent iterative method for computing the generalized inverse and its application to Toeplitz matrices," Linear Algebra and Its Applications, vol. 439, no. 6, pp. 1635–1650, 2013.

[13] J. Rajagopalan, An iterative algorithm for inversion of matrices [M.S. thesis], Concordia University, Quebec, Canada, 1996.

[14] L. Grosz, "Preconditioning by incomplete block elimination," Numerical Linear Algebra with Applications, vol. 7, no. 7-8, pp. 527–541, 2000.

[15] G. Codevico, V. Y. Pan, and M. Van Barel, "Newton-like iteration based on a cubic polynomial for structured matrices," Numerical Algorithms, vol. 36, no. 4, pp. 365–380, 2004.

[16] V. Pan and R. Schreiber, "An improved Newton iteration for the generalized inverse of a matrix, with applications," SIAM Journal on Scientific and Statistical Computing, vol. 12, no. 5, pp. 1109–1130, 1991.

[17] A. Ben-Israel and T. N. E. Greville, Generalized Inverses, Springer, 2nd edition, 2003.

[18] F. Khaksar Haghani and F. Soleymani, "A new high-order stable numerical method for matrix inversion," The Scientific World Journal, vol. 2014, Article ID 830564, 10 pages, 2014.

[19] F. Soleymani, S. Karimi Vanani, H. I. Siyyam, and I. A. Al-Subaihi, "Numerical solution of nonlinear equations by an optimal eighth-order class of iterative methods," Annali dell'Universita di Ferrara, Sezione VII, Scienze Matematiche, vol. 59, no. 1, pp. 159–171, 2013.

[20] X. Wang and T. Zhang, "High-order Newton-type iterative methods with memory for solving nonlinear equations," Mathematical Communications, vol. 19, pp. 91–109, 2014.

[21] H. Hotelling, "Analysis of a complex of statistical variables into principal components," Journal of Educational Psychology, vol. 24, no. 6, pp. 417–441, 1933.

[22] E. Isaacson and H. B. Keller, Analysis of Numerical Methods, John Wiley & Sons, New York, NY, USA, 1966.

[23] E. V. Krishnamurthy and S. K. Sen, Numerical Algorithms: Computations in Science and Engineering, Affiliated East-West Press, New Delhi, India, 2007.

[24] L. Weiguo, L. Juan, and Q. Tiantian, "A family of iterative methods for computing Moore-Penrose inverse of a matrix," Linear Algebra and Its Applications, vol. 438, no. 1, pp. 47–56, 2013.

[25] F. Soleymani, "A fast convergent iterative solver for approximate inverse of matrices," Numerical Linear Algebra with Applications, 2013.

[26] M. Zaka Ullah, F. Soleymani, and A. S. Al-Fhaid, "An efficient matrix iteration for computing weighted Moore-Penrose inverse," Applied Mathematics and Computation, vol. 226, pp. 441–454, 2014.

[27] V. Y. Pan, Structured Matrices and Polynomials: Unified Superfast Algorithms, Springer Birkhäuser, Boston, Mass, USA, 2001.

[28] M. Miladinovic, S. Miljkovic, and P. Stanimirovic, "Modified SMS method for computing outer inverses of Toeplitz matrices," Applied Mathematics and Computation, vol. 218, no. 7, pp. 3131–3143, 2011.

[29] H. Chen and Y. Wang, "A family of higher-order convergent iterative methods for computing the Moore-Penrose inverse," Applied Mathematics and Computation, vol. 218, no. 8, pp. 4012–4016, 2011.

[30] A. R. Soheili, F. Soleymani, and M. D. Petkovic, "On the computation of weighted Moore-Penrose inverse using a high-order matrix method," Computers & Mathematics with Applications, vol. 66, no. 11, pp. 2344–2351, 2013.

[31] F. Soleymani and P. S. Stanimirovic, "A note on the stability of a pth order iteration for finding generalized inverses," Applied Mathematics Letters, vol. 28, pp. 77–81, 2014.

[32] P. S. Stanimirovic and F. Soleymani, "A class of numerical algorithms for computing outer inverses," Journal of Computational and Applied Mathematics, vol. 263, pp. 236–245, 2014.

[33] S. Wolfram, The Mathematica Book, Wolfram Media, 5th edition, 2003.

[34] M. P. Knapp and C. Xenophontos, "Numerical analysis meets number theory: using rootfinding methods to calculate inverses mod p^n," Applicable Analysis and Discrete Mathematics, vol. 4, no. 1, pp. 23–31, 2010.
