
BOOK OF ABSTRACT

NLAA 2013

International Conference on Numerical Linear Algebra and its Applications

15–18 January 2013

DEPARTMENT OF MATHEMATICS
INDIAN INSTITUTE OF TECHNOLOGY GUWAHATI

INDIA

Acknowledgements

Professor B. V. Limaye, Emeritus Professor, IIT Bombay, has played a pivotal role in initiating the event. The organizers would like to thank him for this and for his constant intellectual and moral support. The organizers would also like to thank the following organizations for sponsoring the event.

• Council of Scientific and Industrial Research (CSIR)

• International Mathematical Union - Committee for Developing Countries (IMU - CDC)

• Indian National Science Academy (INSA)

• Defence Research & Development Organisation (DRDO)

• Spoken Tutorial Project, National Mission on Education through ICT

• National Board for Higher Mathematics (NBHM)

• Oil India Limited


i

Organizers

Local Organizing Committee:

Prof. Rafikul Alam (Chairman), Department of Mathematics, IIT Guwahati

Dr. Shreemayee Bora (Secretary), Department of Mathematics, IIT Guwahati

Prof. S. N. Bora, Department of Mathematics, IIT Guwahati

Dr. Kalpesh Kapoor, Department of Mathematics, IIT Guwahati

Dr. G. Sajith, Department of Computer Science & Engg., IIT Guwahati

Prof. B. K. Sarma, Department of Mathematics, IIT Guwahati

Prof. R. K. Sinha, Department of Mathematics, IIT Guwahati

Scientific Committee:

Prof. Rafikul Alam, Department of Mathematics, IIT Guwahati

Prof. R. B. Bapat, ISI Delhi

Dr. Shreemayee Bora, Department of Mathematics, IIT Guwahati

Prof. B. V. Limaye, Department of Mathematics, IIT Bombay

Prof. Volker Mehrmann, Institute for Mathematics, TU Berlin

Prof. Harish Pillai, Department of Electrical Engineering, IIT Bombay

Dr. G. Sajith, Department of Computer Science & Engg., IIT Guwahati

ii

Preface

Numerical Linear Algebra is concerned with the theory and practical aspects of computing solutions of three fundamental problems which are at the core of science and engineering, viz., linear systems of equations, least-squares problems and eigenvalue problems. Reliable software for solving these problems is of utmost importance for most scientific processes, and finite precision arithmetic poses significant challenges to the numerical accuracy of solutions. Also, the rapid advances and changes in computer hardware make it necessary to keep revisiting existing software for these problems so that it makes the most efficient use of the available computing resources. Moreover, the need to design efficient mathematical models of real world phenomena constantly gives rise to challenging problems in the area. Meeting the challenges in computing also leads to a better understanding of the theoretical aspects of these problems and vice versa.

This conference envisages an opportunity for young researchers interested in such problems to interact with some of the leading experts and active researchers in the area and have a glimpse of the state of the art research in the subject and its many applications. It also aims to encourage young researchers to present their work to some of the most eminent researchers in the field.

iii

Schedule

iv

v

vi

vii

Contents

Acknowledgements i

Organizers ii

Preface iii

Schedule iv

Entanglement of Multipartite Systems: Few Open Problems
Bibhas Adhikari 1

Numerical Gradient Algorithms for Eigenvalue Calculations
Ayaz Ahmad 2

Backward Errors for Eigenvalues and Eigenvectors of Structured Eigenvalue Problems
Sk Safique Ahmad 3

Recycling BiCGSTAB
Kapil Ahuja 5

Sensitivity Analysis of Rational Eigenvalue Problem
Rafikul Alam and Namita Behera 6

Application of Structured Linearization for Efficient Computation of the H∞ Norm of a Transfer Matrix
Madhu N. Belur 7

Linear and Nonlinear Matrix Equations Arising in Model Reduction
Peter Benner and Tobias Breiten 10

Finite Element Model Updating: A Structured Inverse Eigenvalue Problem for Quadratic Matrix Pencil
Biswa Nath Datta 11

viii

Structured Eigenvalue Condition Numbers for Parameterized Quasiseparable Matrices
Froilan M. Dopico 12

Structured Eigenvalue Problems – Structure-Preserving Algorithms, Structured Error Analysis
Heike Faßbender 13

Comparative Study of Tridiagonalization Methods for Fast SVD
Bibek Kabi, Tapan Pradhan, Ramanarayan Mohanty and Aurobinda Routray 14

Distance of a Pencil from Having a Zero at Infinity
Rachel Kalpana Kalaimani 17

Extracting Eigenvalues and Eigenvectors of a QEVP in the form of Homogeneous Co-ordinates using Newton-Raphson Technique
Karuna Kalita, Aritra Sasmal and Seamus D Garvey 20

The Separation of two Matrices and its Application in the Perturbation Theory of Eigenvalues and Invariant Subspaces
Michael Karow 22

Computation Considerations for the Group Inverse for an Irreducible M-Matrix
Stephen Kirkland 23

Conditioning of a Basis
Balmohan V. Limaye 24

Robust Stability of Linear Delay Differential-Algebraic Systems
Vu Hoang Linh 25

Minimal Indices of Singular Matrix Polynomials: Some Recent Perspectives
D. Steven Mackey 26

Structured Backward Errors for Eigenvalues of Hermitian Pencils
Shreemayee Bora, Michael Karow, Christian Mehl and Punit Sharma 27

Self-adjoint Differential Operators and Optimal Control
Peter Kunkel, Volker Mehrmann and Lena Scholz 28

ix

Matrix Functions with Specified Eigenvalues
Michael Karow, Daniel Kressner, Emre Mengi, Ivica Nakic and Ninoslav Truhar 29

B†-splittings of Matrices
Debasisha Mishra 30

Characterization and Construction of the Nearest Defective Matrix via Coalescence of Pseudospectral Components
Michael L. Overton, Rafikul Alam, Shreemayee Bora and Ralph Byers 31

Linearizations of Matrix Polynomials in Bernstein Basis
D. Steven Mackey and Vasilije Perovic 32

A Linear Algebraic Look at Differential Riccati Equation and Algebraic Riccati (in)equality
H. K. Pillai 34

A Regularization Based Method for Dynamic Optimization with High Index DAEs
Soumyendu Raha 35

A Constructive Method for Obtaining a Preferred Basis from a Quasi-preferred Basis for M-matrices
Manideepa Saha and Sriparna Bandopadhyay 36

An I/O Efficient Algorithm for Hessenberg Reduction
S. K. Mohanty and G. Sajith 37

Sparse Description of Linear Systems and Application in Computed Tomography
R. Ramu Naidu, C. S. Sastry and J. Phanindra Varma 39

Structured Backward Error of Approximate Eigenvalues of T-palindromic Polynomials
Punit Sharma, Shreemayee Bora, Michael Karow and Christian Mehl 42

On the Numerical Solution of Large-scale Linear Matrix Equations
Valeria Simoncini 43

Eigenvalues of Symmetric Interval Matrices Using a Single Step Eigen Perturbation Method
Sukhjit Singh and D. K. Gupta 44

x

Nonnegative Generalized Inverses and Certain Subclasses of Singular Q-matrices
K. C. Sivakumar 45

Accurate Eigenvalue Decomposition of Arrowhead Matrices and Applications
Ivan Slapnicar, Nevena Jakovcevic Stor and Jesse Barlow 46

A Family of Iterative Methods for Computing the Moore–Penrose Generalized Inverse
Shwetabh Srivastava and D. K. Gupta 47

Distance Problems for Hermitian Matrix Pencils - an ε-Pseudospectra Based Approach
Shreemayee Bora and Ravi Srivastava 48

What's So Great About Krylov Subspaces?
David S. Watkins 49

Author Index 50

xi

Bibhas Adhikari
Centre of Excellence in Systems Science
IIT Rajasthan, Jodhpur, India.

[email protected]

+91 291 251 9006

+91 291 251 6823

Entanglement of Multipartite Systems: Few Open Problems

Bibhas Adhikari

The mystery of entanglement of multipartite systems is still not completely revealed although its usefulness is recognized in quantum information processing. In the axiomatic formulation of quantum mechanics, a multipartite quantum state lies in the tensor product of complex Hilbert spaces. Hence, developing the linear algebraic tools for determining the entangled states has become computationally challenging except for the 2 × 2 and 2 × 3 systems. This calls for the development of graph representations of quantum states with a hope that graph theoretic machinery could be applied to unfold the secrets of entanglement. In this talk, we discuss a few open problems in linear and/or multilinear algebra, including a nonlinear singular value problem that arises in formulating a geometric measure for entanglement. We also discuss the graph representation of multipartite quantum states and a few open problems in the field of combinatorial matrix theory.

1

Ayaz Ahmad
NIT Patna, Bihar 800005

[email protected]

8809493586

Numerical Gradient Algorithms for Eigenvalue Calculations

Ayaz Ahmad

In this paper a numerical algorithm, termed the double bracket algorithm,

H_{k+1} = e^{-\alpha_k [H_k, N]} H_k e^{\alpha_k [H_k, N]},

is proposed for computing the eigenvalues of an arbitrary symmetric matrix. For suitably small \alpha_k, termed time-steps, the algorithm is an approximation of the solution to the continuous-time double bracket equation. An analysis of convergence behaviour showing linear convergence to the desired limit points is presented. Associated with the main algorithms presented for the computation of the eigenvalues or singular values of matrices are algorithms evolving on Lie groups of orthogonal matrices which compute the full eigenspace decompositions of given matrices.
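For concreteness, a minimal Python sketch of the iteration; the fixed diagonal matrix N = diag(1, …, n), the constant step size and the iteration count are illustrative assumptions, not prescribed by the abstract:

```python
import numpy as np
from scipy.linalg import expm

def double_bracket_eigenvalues(A, alpha=0.05, iters=500):
    # Iterate H_{k+1} = exp(-alpha [H_k, N]) H_k exp(alpha [H_k, N]).
    # For symmetric H and diagonal N, the commutator [H, N] is
    # skew-symmetric, so each update is an orthogonal similarity:
    # the spectrum is preserved while H_k is driven towards diagonal form.
    H = A.copy()
    N = np.diag(np.arange(1.0, H.shape[0] + 1.0))
    for _ in range(iters):
        C = H @ N - N @ H            # commutator [H_k, N]
        E = expm(alpha * C)          # orthogonal, since C is skew-symmetric
        H = E.T @ H @ E
    return np.sort(np.diag(H))

A = np.random.randn(5, 5)
A = (A + A.T) / 2
print(double_bracket_eigenvalues(A))
print(np.sort(np.linalg.eigvalsh(A)))  # reference eigenvalues for comparison
```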

2

Sk Safique Ahmad
M block, IET DAVV Campus
Indian Institute of Technology Indore
Khandwa Road, Indore 452017, India.

[email protected]

+91 731 2438731

+91 7312364182

Backward Errors for Eigenvalues and Eigenvectors of Structured Eigenvalue Problems

Sk Safique Ahmad

There are many applications of polynomial/rational eigenvalue problems in various directions; a few of them will be highlighted along with their perturbation analysis. Some known results will be discussed on perturbation analysis of various structured eigenvalue problems and then we will highlight our results. Basically, in this talk we propose a general framework, discussed in [4, 5, 6], for the structured perturbation analysis of several classes of structured matrix polynomials in homogeneous form, including complex symmetric, skew-symmetric, Hermitian, skew-Hermitian, even and odd matrix polynomials. Also we discuss the structured backward error of an approximate eigenpair of a structured homogeneous matrix polynomial with T-palindromic, H-palindromic, T-anti-palindromic, and H-anti-palindromic structure. We introduce structured backward errors for approximate eigenvalues and eigenvectors, and then we construct minimal structured perturbations such that an approximate eigenpair is an exact eigenpair of an appropriately perturbed matrix polynomial. This work extends previous work of [1, 2, 3, 7] for the non-homogeneous case (we include infinite eigenvalues) and we show that the structured backward errors improve the known unstructured backward errors. Further, we extend some of these results to rational eigenvalue problems and construct minimal structured perturbations such that an approximate eigenpair is an exact eigenpair of an appropriately perturbed rational eigenvalue problem.

Note: This work is joint with Volker Mehrmann, Institut für Mathematik, Technische Universität Berlin, Germany.

References

[1] B. Adhikari. Backward perturbation and sensitivity analysis of structured polynomial eigenvalue problem. PhD thesis, IIT Guwahati, Dept. of Mathematics, 2008.

[2] B. Adhikari and R. Alam. On backward errors of structured polynomial eigenproblems solved by structure preserving linearizations. Linear Algebra Appl., 434:1989–2017.

3

[3] B. Adhikari and R. Alam. Structured backward errors and pseudospectra of structured matrix pencils. SIAM J. Matrix Anal. Appl., 31:331–359, 2009.

[4] S. S. Ahmad and V. Mehrmann. Perturbation analysis for complex symmetric, skew-symmetric, even and odd matrix polynomials. Electr. Trans. Num. Anal., 38:275–302, 2011.

[5] S. S. Ahmad and V. Mehrmann. Backward errors for eigenvalues and eigenvectors of Hermitian, skew-Hermitian, H-even, and H-odd matrix polynomials. Linear and Multilinear Algebra, preprint 2012.

[6] S. S. Ahmad and V. Mehrmann. Structured backward errors for structured nonlinear eigenvalue problems. Preprint, 2012.

[7] X. G. Liu and Z. X. Wang. A note on the backward errors for Hermite eigenvalue problems. Appl. Math. Comput., 165:405–417, 2005.

[8] R. C. Li, W. W. Lin, and C. S. Wang. Structured backward error for palindromic polynomial eigenvalue problems. Numer. Math., 116 (2010), 95–122.

4

Kapil Ahuja
Max Planck Institute for Dynamics of Complex Technical Systems
Sandtorstr. 1, 39106 Magdeburg, Germany.

[email protected]

(+49) (0) 391 6110 349

(+49) (0) 391 6110 500

Recycling BiCGSTAB

Kapil Ahuja, Eric de Sturler and Peter Benner

Krylov subspace recycling is a process for accelerating the convergence of sequences of linear systems. Based on this technique we have recently developed the Recycling BiCG algorithm. We now generalize and extend this recycling theory to BiCGSTAB. We modify the BiCGSTAB algorithm to use a recycle space, which is built from left and right approximate eigenvectors.

We test on three application areas. The first problem simulates crack propagation in a metal plate using cohesive finite elements. The second example involves parametric model order reduction of a butterfly gyroscope. The third and final example comes from acoustics. Experiments on these applications give promising results.

5

Namita Behera
Department of Mathematics
Indian Institute of Technology Guwahati
Guwahati 781039, India.

[email protected]

9508640369

Sensitivity Analysis of Rational Eigenvalue Problem

Rafikul Alam and Namita Behera

We discuss sensitivity of a rational eigenvalue problem of the form

R(λ) = \sum_{i=0}^{d} λ^i A_i + L (C − λD)^{−1} U.

In particular, we derive the condition number of simple eigenvalues of R. We also discuss linearization of R(λ) and its effect on the sensitivity of eigenvalues of R.

6

Madhu N. Belur
Department of Electrical Engineering
Indian Institute of Technology Bombay
Powai, Mumbai 400 076, India.

[email protected]

+91 22 2576 7404

+91 22 2572 3707

Application of Structured Linearization for Efficient Computation of the H∞ Norm of a Transfer Matrix

Madhu N. Belur

The H∞-norm of a system's transfer matrix plays a key role in control studies, for example, in the context of robust control. The H∞ norm of a transfer matrix G(s) is defined as

‖G‖_{H∞} := \sup_{λ ∈ C, Re(λ) > 0} σ_{max}(G(λ)),

where σ_{max}(P) is the maximum singular value of a constant matrix P. The norm exists if and only if G is proper and has all its poles in the open left half complex plane. When it exists, the procedure to compute the precise value of the norm γ involves a computationally intensive procedure using an iteration on a parameter dependent Hamiltonian matrix constructed from G. The current procedure available in numerical computation packages like Scilab and Matlab involves iteration on the γ value by checking whether the Hamiltonian matrix has or does not have eigenvalues on the imaginary axis. Of course, within each such iteration on the γ value is nested the iteration to compute the eigenvalues of a constant matrix. The precision to which the H∞ norm is sought is specified through the tolerance band up to which the procedure checks whether or not the Hamiltonian matrix has imaginary axis eigenvalues.

This work brings out the use of structured linearization of a Bezoutian polynomial matrix to compute just once the generalized eigenvalues of a matrix pair and thus drastically reduces the computation involved in calculating the H∞ norm of a transfer matrix. Moreover, the precision in the calculated value of the norm is the machine precision: typically 10^{−16}.

The method we use first utilizes the key property that when the parameter γ is equal to the H∞ norm, the Hamiltonian matrix has repeated eigenvalues on the imaginary axis; this is captured in the property that a certain polynomial p(γ, ω) and its derivative with respect to ω, denoted by say q(γ, ω), are not coprime at this value of γ. Hence, we reconsider the bivariate polynomials p(γ, ω) and q(γ, ω) as polynomials in ω with coefficients in γ, denoted respectively as p_γ(ω) and q_γ(ω). Determining coprimeness of two polynomials is straightforward: the resultant of the two polynomials is zero if and only if they have a common factor.

We thus calculate the resultant R(γ) of the two polynomials p_γ and q_γ and compute the roots of R(γ) to obtain all candidates γ that make p_γ and q_γ have a common root ω_0. Though

7

it is conceptually easier to calculate the resultant R(γ) as the determinant of the Sylvester matrix, the computationally easier way is to construct the Bezoutian matrix B(γ) of the two polynomials p_γ and q_γ and use the property that the coefficient matrices of the polynomial matrix B(γ) are symmetric. The matrix pencil (A, E) obtained by the structured linearization of B(γ) helps in calculating the generalized eigenvalues of (A, E) and thus the roots of the resultant R(γ). This is elaborated in what follows.

With a slight abuse of notation, let the coefficient of ω^i in the polynomial p_γ(ω) be denoted by p_i(γ), and similarly so for q_γ:

p_γ(ω) = p_0(γ) + ω p_1(γ) + ⋯ + ω^N p_N(γ) and q_γ(ω) = q_0(γ) + ω q_1(γ) + ⋯ + ω^{N−1} q_{N−1}(γ),

and define their Bezoutian polynomial b_γ(ζ, η) and the Bezoutian matrix B(γ) as follows:

b_γ(ζ, η) := (p_γ(ζ) q_γ(η) − p_γ(η) q_γ(ζ)) / (ζ − η) = \begin{bmatrix} 1 & ζ & \cdots & ζ^{N−1} \end{bmatrix} B(γ) \begin{bmatrix} 1 & η & \cdots & η^{N−1} \end{bmatrix}^T,

with B(γ) = B_0 + γB_1 + ⋯ + γ^m B_m defined to obtain the above equality. Clearly, B(γ) is a symmetric polynomial matrix, i.e. each of B_0, …, B_m is a constant real symmetric matrix. Rather than determining the roots of det B(γ), we use the results in [2] to obtain a structured linearization: define the symmetric matrices E and A as

E := \begin{bmatrix} B_m & 0 \\ 0 & −H_{m−1} \end{bmatrix} and A := −H_m,

where H_k denotes the block Hankel matrix

H_k := \begin{bmatrix} B_{k−1} & B_{k−2} & \cdots & B_0 \\ B_{k−2} & \cdots & B_0 & 0 \\ \vdots & & & \vdots \\ B_0 & 0 & \cdots & 0 \end{bmatrix}.

The development in [2] helps conclude that the generalized eigenvalues of (A, E) are the roots of det B(γ). Thus we obtain a finite number of candidates γ, one of which is ‖G‖_{H∞}.

The above procedure when implemented in Scilab and compared with the current procedure available in Scilab (and Matlab) shows remarkable improvement in computation time and accuracy. In spite of the precision of generalized eigenvalues being of the order of 10^{−16} and that using the Hamiltonian matrix iteration being 10^{−8} (the default value), the time taken using structured linearization is about 20 to 40 times less than the currently available method. Moreover, this comparison becomes more favourable to the new method for larger orders of the transfer matrix G(s). See [1] for an elaborate treatment of the procedure¹ and details of the computational experiments.

¹The author thanks Dr. C. Praagman, close collaboration with whom resulted in the new method and [1].
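To illustrate the coprimeness test that drives the method, a small sketch using the Sylvester matrix for scalar polynomials; the talk itself builds the Bezoutian instead, precisely because its symmetric coefficient matrices admit the structured linearization:

```python
import numpy as np

def sylvester_resultant(p, q):
    # p, q: coefficient arrays, highest degree first (numpy convention).
    # The resultant is the determinant of the Sylvester matrix and
    # vanishes exactly when p and q have a common root.
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                    # n shifted copies of p
        S[i, i:i + m + 1] = p
    for i in range(m):                    # m shifted copies of q
        S[n + i, i:i + n + 1] = q
    return np.linalg.det(S)

# (x-1)(x-2) and (x-2)(x+3) share the root 2, so the resultant is ~ 0
print(sylvester_resultant(np.poly([1, 2]), np.poly([2, -3])))  # ~ 0
print(sylvester_resultant(np.poly([1, 2]), np.poly([4, -3])))  # nonzero
```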

8

References

[1] M.N. Belur and C. Praagman, An efficient algorithm to compute the H-infinity norm, IEEETransactions on Automatic Control, vol. 56, no. 7, pages 1656-1660, 2011.

[2] D. Mackey, N. Mackey, C. Mehl, and V. Mehrmann, Structured polynomial eigenvalue problems: good vibrations from good linearizations, SIAM Journal on Matrix Analysis and Applications, vol. 28, no. 4, pages 1029-1051, 2006.

9

Peter Benner
Max Planck Institute for Dynamics of Complex Technical Systems
Sandtorstr. 1, 39106 Magdeburg, Germany.

[email protected]

+49 391 6110 450

+49 391 6110 453

Linear and Nonlinear Matrix Equations Arising in Model Reduction

Peter Benner and Tobias Breiten

System-theoretic model reduction methods like balanced truncation for linear state-space systems require the solution of certain linear or nonlinear matrix equations. In the linear case, these are Lyapunov or algebraic Riccati equations. Generalizing these model reduction methods to new system classes, variants of these matrix equations have to be solved. Primarily, we will discuss the generalized matrix equations associated to bilinear and stochastic control systems, where in addition to the Lyapunov operator, a positive operator appears in the formulation of the equations. Due to the large-scale nature of these equations in the context of model order reduction, we study possible low rank solution methods for these so-called bilinear Lyapunov equations. We show that under certain assumptions one can expect a strong singular value decay in the solution matrix allowing for low rank approximations. We further provide some reasonable extensions of some of the most frequently used linear low rank solution techniques such as the alternating directions implicit (ADI) iteration and the extended Krylov subspace method. By means of some standard numerical examples used in the area of bilinear model order reduction, we will show the efficiency of the new methods.
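The singular value decay invoked above is easy to observe in the standard linear case; a minimal scipy sketch with a randomly generated stable A and a rank-one right-hand side (both illustrative choices; the bilinear equations of the talk add a positive operator term to this setting):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 200
rng = np.random.default_rng(0)
A = -2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # stable at this scaling
B = rng.standard_normal((n, 1))                           # rank-one input matrix
X = solve_continuous_lyapunov(A, -B @ B.T)                # A X + X A^T + B B^T = 0
sv = np.linalg.svd(X, compute_uv=False)
print(sv[:8] / sv[0])   # rapid decay: X admits an accurate low rank approximation
```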

If time permits, we also discuss some new classes of nonlinear matrix equations arising in special variants of balanced truncation.

10

Biswa Nath Datta, IEEE Fellow
Distinguished Research Professor
Northern Illinois University
DeKalb, Illinois 60115, USA

[email protected]

815 753 6759

815 753 1112

Finite Element Model Updating: A Structured Inverse Eigenvalue Problem for Quadratic Matrix Pencil

Biswa Nath Datta

The finite element model updating problem is a special inverse eigenvalue problem for a quadratic matrix pencil and arises in vibration industries in the context of designing automobiles, air and space crafts, and others. The problem is to update a very large theoretical finite element model with more than a million degrees of freedom using only a few measured data from a real-life structure. The model has to be updated in such a way that the measured eigenvalues and eigenvectors are incorporated into the model, the symmetry of the original model is preserved, and the eigenvalues and eigenvectors that do not participate in updating remain unchanged. When the model has been updated this way, the updated model can be used for future design with confidence. Finite element model updating also has useful applications in health monitoring and damage detection in structures, including bridges, buildings, highways, and others.

Despite much research done on the problem both by academic and industrial researchers and engineers, the problem has not been satisfactorily solved and active research is still underway. There are many industrial solutions which are ad hoc in nature and often lack solid mathematical foundations.

In this talk, I shall present a brief overview of the existing techniques and their practical difficulties along with the new developments within the last few years. The talk will conclude with a few words on future research directions on this topic.

11

Froilan M. Dopico
ICMAT and Department of Mathematics
Universidad Carlos III de Madrid
Avda. de la Universidad 30, 28911 Leganes (Madrid), Spain

[email protected]

+34916249446

+34916249129

Structured Eigenvalue Condition Numbers for Parameterized Quasiseparable Matrices

Froilan M. Dopico

Rank structured matrices of different types appear in many applications and have received considerable attention in recent years from theoretical and numerical points of view. In particular, the development of fast algorithms for performing all tasks of Numerical Linear Algebra with rank structured matrices has been, and still is, a very active area of research. The key idea for developing these fast algorithms is to represent n × n rank structured matrices in terms of O(n) parameters and to design algorithms that run directly on the parameters instead of on the matrix entries. Therefore these fast algorithms usually produce tiny backward errors on the parameters, and it would be natural to study the condition numbers of different magnitudes with respect to perturbations of the parameters. However, there are no results published in the literature on this topic, and the goal of this talk is to present a number of first results in this area. To this purpose, we consider a very important class of rank-structured matrices: quasiseparable matrices. It is well known that n × n quasiseparable matrices can be represented in terms of O(n) parameters or generators, but that these generators are not unique. In this talk, we plan to develop for the first time eigenvalue condition numbers of quasiseparable matrices with respect to tiny relative perturbations of the generators, and to compare these condition numbers for different specific sets of generators of the same matrix, with the aim of determining which representation is preferable for eigenvalue computations.
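As a toy instance of such an O(n) parameterization, the following sketch assembles an order-one quasiseparable (semiseparable-plus-diagonal) matrix from generator vectors; the generator convention shown is one of several in the literature and is chosen only for illustration:

```python
import numpy as np

n = 6
rng = np.random.default_rng(3)
u, v, p, q, d = (rng.standard_normal(n) for _ in range(5))
# 5n parameters represent the n x n matrix: strictly lower part from u and v,
# strictly upper part from p and q, diagonal from d.
A = np.tril(np.outer(u, v), -1) + np.diag(d) + np.triu(np.outer(p, q), 1)
# Fast algorithms run on (u, v, p, q, d) directly, so their backward errors,
# and the condition numbers studied in the talk, live on these generators.
print(A)
```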

12

Heike Faßbender
AG Numerik, Institut Computational Mathematics
TU Braunschweig
Fallersleber-Tor-Wall 23, 38106 Braunschweig, Germany.

[email protected]

+49 531 391 7535

+49 531 391 8206

Structured Eigenvalue Problems – Structure-Preserving Algorithms, Structured Error Analysis

Heike Faßbender

Many eigenvalue problems arising in practice are structured due to (physical) properties induced by the original problem. Structure can also be introduced by discretization and linearization techniques. Preserving this structure can help preserve physically relevant symmetries in the eigenvalues of the matrix and may improve the accuracy and efficiency of an eigenvalue computation. This is well known for n × n real symmetric matrices A = A^T. Every eigenvalue is real and every right eigenvector is also a left eigenvector belonging to the same eigenvalue. Many numerical methods, such as QR, Arnoldi and Jacobi-Davidson, automatically preserve symmetric matrices (and hence compute only real eigenvalues); unavoidable round-off errors cannot result in the computation of complex-valued eigenvalues. Algorithms tailored to symmetric matrices (e.g., divide and conquer or Lanczos methods) take much less computational effort and sometimes achieve high relative accuracy in the eigenvalues and – having the right representation of A at hand – even in the eigenvectors.

Another example is given by matrices for which the complex eigenvalues with nonzero real part theoretically appear in a pairing λ, \bar{λ}, λ^{−1}, \bar{λ}^{−1}. Using a general eigenvalue algorithm such as QR or Arnoldi results here in computed eigenvalues which in general do not display this eigenvalue pairing any longer. This is due to the fact that each eigenvalue is subject to unstructured rounding errors, so that each eigenvalue is altered in a slightly different way. When using a structure-preserving algorithm this effect can be avoided, as the eigenvalue pairing is enforced so that all four eigenvalues are subject to the same rounding errors.
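The pairing can be observed numerically for symplectic matrices, whose eigenvalues occur in exactly such quadruples; in the sketch below, building S as the exponential of a Hamiltonian matrix is an illustrative construction:

```python
import numpy as np
from scipy.linalg import expm

n = 3
rng = np.random.default_rng(2)
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])
W = rng.standard_normal((2 * n, 2 * n))
H = 0.3 * J @ (W + W.T)     # Hamiltonian matrix: J H is symmetric
S = expm(H)                 # the exponential of a Hamiltonian matrix is symplectic
lam = np.linalg.eigvals(S)  # computed by an unstructured (QR-based) solver
print(np.sort_complex(lam))
print(np.sort_complex(1 / lam))  # same set up to rounding: lambda pairs with 1/lambda
```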

This talk focuses on structure-preserving algorithms and structured error analysis, such as structured condition numbers and backward errors, presenting the state of the art for a few classes of matrices.

13

Bibek Kabi
Department of Advanced Technology and Development Center
IIT Kharagpur, Kharagpur, 721302, India.

[email protected]

09749713677


Comparative Study of Tridiagonalization Methods for Fast SVD

Bibek Kabi, Tapan Pradhan, Ramanarayan Mohanty and Aurobinda Routray

Singular Value Decomposition (SVD) is a powerful tool in digital signal and image processing applications. The eigenspace based method is useful for face recognition in image processing, where a fast SVD of the image covariance matrix is computed. The SVD of dense symmetric matrices can be computed using either one step or two step iterative numerical algorithms. One step methods include Jacobi's and Hestenes' algorithms, which generate accurate singular values and vectors. However, the time of execution increases with the dimension of the matrices. Computation of the SVD of dense symmetric matrices via the two step method consists of two stages: 1. Bidiagonalization or Tridiagonalization and 2. Diagonalization. Tridiagonalization is a significant step for the dense symmetric matrix, and its performance affects the performance of the SVD. Therefore, in this article a comparative analysis of available tridiagonalization methods has been carried out to select the fast and accurate tridiagonalization method, to be used as the first step for fast SVD computation. The Lanczos tridiagonalization method with partial orthogonalization, among the available tridiagonalization methods, is found to be the fastest and most accurate. Hence, Lanczos tridiagonalization with partial orthogonalization, along with the Divide and Conquer algorithm, could be the fast SVD algorithm for computing the SVD of symmetric matrices.

For any dense real symmetric matrix A_{n×n}, it is possible to find an orthogonal Q_{n×n} such that T = Q^T A Q, where T is a symmetric tridiagonal matrix. There are various tridiagonalization methods like Givens rotation, Householder reflection and the Lanczos method. Householder and Givens methods are well known for tridiagonalization of a dense symmetric matrix. Givens method is used to reduce a dense symmetric matrix into its equivalent symmetric tridiagonal form by multiplying a sequence of appropriately chosen elementary orthogonal transformation matrices. Householder reflection overwrites A with T = Q^T A Q, where T is tridiagonal and Q is an orthogonal matrix which is the product of Householder transformation matrices. The Lanczos algorithm produces a symmetric tridiagonal matrix T and a set of Lanczos vectors q_j, j = 1, …, n, which form the columns of the orthogonal matrix Q. Lanczos uses a three-term recurrence equation to reduce a symmetric matrix into its tridiagonal form. The Lanczos algorithm without orthogonalization became popular for tridiagonalizing a matrix. However, the loss of orthogonality among Lanczos vectors complicates the relationship between A's singular values and those of T. To circumvent this problem, various orthogonalization schemes

14

have been introduced, like full, selective, and partial orthogonalization. Orthogonalizing each Lanczos vector against all previous vectors is called complete or full orthogonalization. At each iteration the Lanczos vector is orthogonalized by the Gram–Schmidt process. However, the full orthogonalization scheme proved to be expensive with respect to storage requirements and execution time. Therefore, the selective orthogonalization scheme has been introduced. In this scheme, Lanczos vectors are orthogonalized against a few selected vectors, namely the Ritz vectors which have nearly converged. However, one of the major drawbacks of selective orthogonalization is the factorization of T_k. This involves again the solution of a symmetric tridiagonal singular value problem using algorithms like Divide and Conquer or QR factorization. There are other drawbacks mentioned by S. Qiao in his work. To overcome these drawbacks partial orthogonalization has been introduced. It orthogonalizes Lanczos vectors only to the square root of the machine epsilon and removes the computation of Ritz vectors.

A comparative study of various tridiagonalization methods has been presented based on orthogonality error, factorization error and elapsed time. Orthogonality and factorization error have been named Oerror and Ferror respectively. The orthogonality error was quite small in all the cases. However, factorization error and elapsed time play a major role, forming the basis of comparison. For symmetric matrices of sizes 100 and 300, the Ferror and elapsed time (secs) obtained from Givens method are (1.4668e+003, 4.085310 secs) and (7.6239e+003, 1043.504051 secs) respectively. The results for the same matrices obtained from Householder transformation are (3.6013e+003, 1.101101 secs) and (3.1710e+004, 107.275468 secs) respectively. The Lanczos method with partial orthogonalization produces the following results: (1.0501e−005, 0.049968 secs) and (6.1891e−005, 0.313961 secs). Hence, it is seen that Lanczos with partial orthogonalization proves to be much more effective in terms of errors and elapsed time. The elapsed times for large matrices of sizes 1000 and 1500 with partial orthogonalization are 12.424729 and 45.833286 secs respectively. Hence, Lanczos with partial orthogonalization can be used for tridiagonalizing large image covariance matrices in less time, ensuring fast SVD in digital signal and image processing applications. Experimental results show that all the tridiagonalization schemes effectively reduce a dense symmetric matrix to its tridiagonal form, whereas Lanczos with partial orthogonalization proved to be much more efficient. The Lanczos tridiagonalization with partial orthogonalization scheme may be combined with the Divide and Conquer algorithm for computation of the fast SVD of dense symmetric matrices. A fixed-point format of the fast SVD can be developed and implemented in embedded platforms like ARM, DSPs and FPGAs for face and eye tracking and other digital signal and image processing applications.
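A minimal sketch of the Lanczos three-term recurrence discussed above, without any reorthogonalization; partial orthogonalization would additionally re-orthogonalize a new Lanczos vector whenever orthogonality drifts beyond the square root of the machine epsilon:

```python
import numpy as np

def lanczos_tridiag(A, m):
    # Build Q with (ideally) orthonormal columns and the coefficients of the
    # symmetric tridiagonal T = Q^T A Q via the three-term recurrence.
    n = A.shape[0]
    Q = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(max(m - 1, 0))
    q = np.random.default_rng(0).standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(m):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] == 0:          # invariant subspace found; stop early
                break
            Q[:, j + 1] = w / beta[j]
    return Q, alpha, beta             # T = diag(alpha) + off-diagonals beta

# in floating point the columns of Q slowly lose orthogonality, which is
# what the full/selective/partial orthogonalization schemes above repair
```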

15

References

[1] G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed., The Johns Hopkins University Press, Baltimore and London, 1996.

[2] S. Qiao, Orthogonalization Techniques for the Lanczos Tridiagonalization of Complex Symmetric Matrices, Proc. of SPIE, Vol. 5559 (2004), 423–434.

[3] C. C. Paige, Error Analysis of the Lanczos Algorithm for Tridiagonalizing a Symmetric Matrix, J. Inst. Math. Appl., 18 (1976), 341–349.

16

Rachel Kalpana Kalaimani
212, Control & Computing Lab
Dept. of Electrical Engg.
Indian Institute of Technology Bombay
Mumbai 400076, India.

[email protected]

91 9920731380

Distance of a Pencil from Having a Zero at Infinity

Rachel Kalpana Kalaimani

Objective: For a given pair (E, A) such that the matrix pencil sE − A has no zeros at infinity, compute a nearest perturbed pair (Ê, Â) so that the pencil sÊ − Â has a zero at infinity.

Introduction
We consider a linear time invariant system represented in the descriptor state space form as follows:

E\dot{x} = Ax + Bu.

Assume that det(sE − A) ≢ 0. When E is singular the free response of the system may have impulsive modes. This corresponds to the matrix pencil sE − A 'losing rank at s = ∞'. If this happens to be the case then the matrix pencil sE − A is said to have a zero at infinity. More precisely, sE − A is said to have a zero at infinity if the degree of the determinant of (sE − A) is less than rank E. (Refer to [1] for a precise definition of zeros at infinity.) An important fact to be noted here is that the matrix E being singular is only a necessary condition for the presence of zeros at infinity. However, for the generalized eigenvalue problem, E being singular is a necessary and sufficient condition for the pencil sE − A to have eigenvalues at infinity. We follow the system theoretic notion of zeros at infinity as in this case the number of zeros at infinity corresponds to the number of impulsive solutions in a dynamical system. Refer to [2] for more about this distinction.

Problem Statement
Given: E, A ∈ R^{n×n} with rank E = r such that the pencil sE − A has no zero at infinity. Let X := {(Ê, Â) ∈ R^{n×n} × R^{n×n} | sÊ − Â has a zero at infinity}. Find

1. ε = min_{(Ê, Â) ∈ X} (‖E − Ê‖_2 + ‖A − Â‖_2).

2. A pair (Ê, Â) that achieves the ε.

17

Results
Since the problem is about perturbing only the coefficient matrices, it would be helpful to characterize the condition for no zeros at infinity in terms of the coefficient matrices. The following lemma does this.

Lemma 1. Consider a pair (E, A), E, A ∈ R^{n×n}, with rank of E equal to r. Let M ∈ R^{(n−r)×n} of full row rank be such that ME = 0. Then sE − A has no zeros at infinity if and only if dim(MA ker E) = n − rank E.

Thus the pencil sE − A has no zeros at infinity if and only if a smaller matrix derived from A is nonsingular. The metric is defined by the 2-norm. Hence for a given nonsingular matrix the distance from the nearest singular matrix is the smallest singular value of the former matrix. This fact is crucially used in the main result, which is the following algorithm providing the perturbation amount ε and a pair (Ê, Â) as required.

Algorithm 1. Input: (E, A) and r := rank E such that sE − A has no zeros at infinity.
Output: ε and (Ê, Â).
Define A_k := A([n−k+1 : n], [n−k+1 : n]); this corresponds to the submatrix of A formed by the last k rows and k columns of A. Let the nullity of E be t, i.e. t = n − r.

Step 1: Find the SVD of E: UEV = E′. Hence E′ = diag(σ_1, …, σ_r, 0, …, 0), where σ_1 ≥ ⋯ ≥ σ_r are the singular values of E. Then A′ := UAV.

Step 2: Let k = 0 and min = σ_t(A′_t), where σ_t(A′_t) refers to the smallest singular value of A′_t. Perform the following iteration.
For i = 0 to r − 1:
    If σ_{r−i}(E) > min:
        Stop
    Else if (σ_{r−i}(E) + σ_{t+1+i}(A′_{t+1+i})) < min:
        min = σ_{r−i}(E) + σ_{t+1+i}(A′_{t+1+i})
        k = i + 1
End.
Hence ε = min and Ê = U^{−1} diag(σ_1, …, σ_{r−k}, 0, …, 0) V^{−1}.

Step 3: Find the singular values of A′_{t+k} and let them be σ′_1, …, σ′_{t+k}. Let U_1 and V_1 be the matrices involved in computing the SVD of A′_{t+k}. Define U_2 and V_2 as follows:

U_2 = \begin{bmatrix} I_{r−k} & 0 \\ 0 & U_1 \end{bmatrix} and V_2 = \begin{bmatrix} I_{r−k} & 0 \\ 0 & V_1 \end{bmatrix}.

Then it follows that

U_2 A′ V_2 = \begin{bmatrix} A′_{11} & A′_{12} \\ A′_{21} & \mathrm{diag}(σ′_1, …, σ′_{t+k}) \end{bmatrix} and let A_2 := \begin{bmatrix} A′_{11} & A′_{12} \\ A′_{21} & \mathrm{diag}(σ′_1, …, σ′_{t+k−1}, 0) \end{bmatrix}

18

where A′_{11}, A′_{12} and A′_{21} are submatrices of A′. Therefore calculate Â = U^{−1} U_2^{−1} A_2 V_2^{−1} V^{−1}.
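A numpy sketch of Steps 1 and 2 of Algorithm 1, assuming E is singular (t ≥ 1) and using a simple threshold for the numerical rank; Step 3, which constructs Â, is omitted for brevity:

```python
import numpy as np

def smallest_sv(M):
    return np.linalg.svd(M, compute_uv=False)[-1]

def distance_to_zero_at_infinity(E, A):
    n = E.shape[0]
    u, s, vt = np.linalg.svd(E)            # E = u @ diag(s) @ vt
    r = int(np.sum(s > 1e-12 * s[0]))      # numerical rank of E
    t = n - r                              # nullity of E (assumed >= 1)
    Ap = u.T @ A @ vt.T                    # A' = U A V in the abstract's notation
    best, k = smallest_sv(Ap[n - t:, n - t:]), 0   # sigma_t(A'_t)
    for i in range(r):
        if s[r - 1 - i] > best:            # sigma_{r-i}(E) in 1-based indexing
            break
        m = t + 1 + i
        cand = s[r - 1 - i] + smallest_sv(Ap[n - m:, n - m:])
        if cand < best:
            best, k = cand, i + 1
    s_hat = np.concatenate([s[:r - k], np.zeros(n - r + k)])
    E_hat = u @ np.diag(s_hat) @ vt        # E_hat = U^{-1} diag(...) V^{-1}
    return best, E_hat
```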

References

[1] A. I. G. Vardulakis, "Linear Multivariable Control, Algebraic Analysis and Synthesis Methods", John Wiley and Sons, Chichester, 1991.

[2] P. V. Dooren and P. Dewilde, "The eigenstructure of an arbitrary polynomial matrix: computational aspects", Linear Algebra and its Applications, vol. 50, pp. 545-579, 1983.

19

Karuna Kalita
Department of Mechanical Engineering
Indian Institute of Technology Guwahati
Guwahati 781039, India.

[email protected]

+91 (0)361 2582680

Extracting Eigenvalues and Eigenvectors of a QEVP in the form of Homogeneous Co-ordinates using Newton-Raphson Technique

Karuna Kalita1, Aritra Sasmal1 and Seamus D Garvey2

1Department of Mechanical EngineeringIndian Institute of Technology Guwahati

Guwahati 781039, India.

2Department of Mechanical, Materials and Manufacturing EngineeringUniversity of Nottingham

University Park, Nottingham NG7 2RD, United Kingdom

The equation of motion of a second order system is

M \ddot{q} + D \dot{q} + K q = f

where {K, D, M} are the system stiffness, damping and mass matrices and their dimensions are N × N. f is the vector of generalised forces of dimension N × 1 and q is the vector of generalised displacements of dimension N × 1. {K, D, M} are not assumed to be symmetric and the mass matrix may not be invertible.

Two systems {K_0, D_0, M_0} and {K_1, D_1, M_1} are related by a Structure-Preserving Equivalence [1] if there are two invertible (2N × 2N) matrices {T_L, T_R} such that the Lancaster Augmented Matrices (LAMs) of system {K_0, D_0, M_0} are transformed to become the LAMs of {K_1, D_1, M_1}, and the transformations will be

T_L^T \mathcal{K}_0 T_R = \mathcal{K}_1,  T_L^T \mathcal{D}_0 T_R = \mathcal{D}_1  and  T_L^T \mathcal{M}_0 T_R = \mathcal{M}_1.

The quadratic eigenvalue-eigenvector problem (QEVP) of interest, in homogeneous coordinates form as described in [2], is

(k_i \mathcal{K} + d_i \mathcal{D} + m_i \mathcal{M}) v_i = 0,  w_i^T (k_i \mathcal{K} + d_i \mathcal{D} + m_i \mathcal{M}) = 0.

20

v_i and w_i are the right and left eigenvectors respectively of dimension 2N × 2, and \mathcal{M}, \mathcal{D} and \mathcal{K} are matrices of size 2N × 2N with the following structures:

\mathcal{M} := \begin{bmatrix} 0 & K \\ K & D \end{bmatrix},  \mathcal{D} := \begin{bmatrix} K & 0 \\ 0 & −M \end{bmatrix}  and  \mathcal{K} := \begin{bmatrix} −D & −M \\ −M & 0 \end{bmatrix}

In this paper a method for extracting eigenvalues and eigenvectors of a QEVP in the form of homogeneous coordinates using the Newton-Raphson technique is presented.

References

[1] S. D. Garvey, M. I. Friswell and U. Prells, Coordinate Transformations for Second Order Systems. Part I: General Transformations, Journal of Sound and Vibration, volume 258, issue 5, pages 885-909, 2002.

[2] S. D. Garvey, Basic Mathematical Ideas behind a Rayleigh Quotient Method for the QEVP(Unpublished manuscript).

21

Michael Karow
Institut für Mathematik
Technische Universität Berlin
Straße des 17. Juni 136
10623 Berlin, Germany.

[email protected]

+49 3031425004

+49 3031421264

The Separation of two Matrices and its Application in the PerturbationTheory of Eigenvalues and Invariant Subspaces

Michael Karow

We discuss the three definitions of the separation of two matrices given by Stewart, Varah and Demmel. Then we use the separation in order to obtain an inclusion theorem for pseudospectra of block triangular matrices. Furthermore, we present two perturbation bounds for invariant subspaces and compare them with the classical bounds of Stewart and Demmel.

22

Stephen Kirkland
Stokes Professor, Hamilton Institute
National University of Ireland Maynooth

[email protected]

+353 (0)1 708 6797

Computation Considerations for the Group Inverse for an Irreducible M-Matrix

Stephen Kirkland

M-matrices arise in numerous applied settings, and a certain generalised inverse – the group inverse – of an irreducible M-matrix turns out to yield useful information. For example, perturbation theory for stationary distribution vectors of Markov chains, sensitivity analysis for stable population vectors in mathematical ecology, and effective resistance in resistive electrical networks all rely on the use of the group inverse of an irreducible M-matrix.

How then can we compute such a group inverse? In this talk, we will survey several known methods for computing the group inverse of an irreducible M-matrix, and then discuss some sensitivity and perturbation results.
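One classical route is via a full-rank factorization: if A = FG with F and G of full rank r and GF nonsingular (which holds whenever the group inverse exists, as for a singular irreducible M-matrix), then A# = F(GF)^{−2}G. A minimal sketch, with the SVD-based factorization and the Markov chain example as illustrative choices:

```python
import numpy as np

def group_inverse(A, tol=1e-12):
    # Full-rank factorization A = F @ G via the SVD, then A# = F (GF)^{-2} G.
    U, s, Vt = np.linalg.svd(A)
    r = int(np.sum(s > tol * s[0]))
    F = U[:, :r] * s[:r]
    G = Vt[:r, :]
    GF = G @ F
    return F @ np.linalg.solve(GF, np.linalg.solve(GF, G))

# A = I - P for a small irreducible stochastic matrix P is a singular
# irreducible M-matrix, the setting of the talk.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
A = np.eye(3) - P
Ag = group_inverse(A)
print(np.allclose(A @ Ag @ A, A),      # the three defining
      np.allclose(Ag @ A @ Ag, Ag),    # group-inverse
      np.allclose(A @ Ag, Ag @ A))     # identities
```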

23

Balmohan V. Limaye
Department of Mathematics
Indian Institute of Technology Bombay
Powai, Mumbai 400076, India.

[email protected]

Conditioning of a Basis

Balmohan V. Limaye

We define condition numbers of a basis of a finite dimensional normed space in terms of the norms of the basis elements and the distances of the basis elements from their complementary subspaces. Optimal scaling strategies are determined. Condition numbers of a basis of an inner product space can be calculated explicitly in terms of the diagonal entries of the corresponding Gram matrix and of its inverse. We give estimates for the change in the condition numbers when a given basis is transformed by a nonsingular matrix to another basis. Our condition numbers arise naturally when we address the problem of computing the dual basis with the given basis as the data.
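One explicit computation of this kind, sketched under the standard facts that ‖b_i‖ = √(G_ii) and that the distance from b_i to the span of the other basis elements is 1/√((G^{−1})_ii); the resulting per-element quantity √(G_ii (G^{−1})_ii) is only an illustration, and the talk's precise definitions may differ:

```python
import numpy as np

B = np.array([[1.0, 1.0],
              [0.0, 1e-3]])        # columns: a nearly dependent basis
G = B.T @ B                        # Gram matrix of the basis
Ginv = np.linalg.inv(G)
kappa = np.sqrt(np.diag(G) * np.diag(Ginv))   # norm / distance-to-complement
print(kappa)                       # large entries flag ill-conditioned elements
```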

24

Vu Hoang Linh
Faculty of Mathematics, Mechanics and Informatics
Vietnam National University - Hanoi
334 Nguyen Trai, Thanh Xuan, Hanoi, Vietnam

[email protected]

+84 4 38581135

+84 4 38588817

Robust Stability of Linear Delay Differential-Algebraic Systems

Vu Hoang Linh

In this talk, we discuss the robust stability of linear time-invariant delay differential-algebraic systems with respect to structured and admissible perturbations. To measure the distance to instability, the concept of stability radii is used. First, existing results on stability radii of differential-algebraic systems without delay are briefly summarized. Then, we show how some of these results can be extended to the case of differential-algebraic systems with delay, which is more complicated than the non-delay case. As a main result, under certain structure conditions, the formula of the (complex) stability radius is obtained. In addition, the asymptotic behaviour of the stability radii of discretised systems is characterised as the stepsize tends to zero. The talk is based on joint works with Nguyen Huu Du, Do Duc Thuan, and Volker Mehrmann.

25

D. Steven Mackey
Dept. of Mathematics
Western Michigan University
1903 W. Michigan Ave
Kalamazoo, MI 49008-5248

[email protected]

269 387 4539

269 387 4530

Minimal Indices of Singular Matrix Polynomials: Some Recent Perspectives

D. Steven Mackey

In addition to eigenvalues, singular matrix polynomials also possess scalar invariants called "minimal indices" that are significant in a number of application areas, such as control theory and linear systems theory. In this talk I will discuss recent developments in our understanding of several fundamental issues concerning minimal indices, starting with the very definition of the concept. Next the question of how the minimal indices of a polynomial are related to those of its linearizations is considered.

Finally I describe the "index sum theorem", a fundamental relationship between the elementary divisors and minimal indices of any matrix polynomial, and some of its consequences.

26

Christian Mehl
Institute of Mathematics
TU Berlin
MA 4-5, 10623 Berlin, Germany.

[email protected]

(+49) 30 314 25741

(+49) 30 314 79706

Structured Backward Errors for Eigenvalues of Hermitian Pencils

Shreemayee Bora, Michael Karow, Christian Mehl and Punit Sharma

In this talk, we consider the structured backward errors for eigenvalues of Hermitian pencils or, in other words, the problem of finding the smallest Hermitian perturbation so that a given value is an eigenvalue of the perturbed Hermitian pencil.

The answer is well known for the case that the eigenvalue is real, but in the case of nonreal eigenvalues, only the structured backward error for eigenpairs has been considered so far, i.e., the problem of finding the smallest Hermitian perturbation so that a given pair is an eigenpair of the perturbed Hermitian pencil.

In this talk, we give a complete answer to the question by reducing the problem to an eigenvalue minimization problem of Hermitian matrices depending on two real parameters. We will see that the structured backward error of complex nonreal eigenvalues may be significantly different from the corresponding unstructured backward error, which is in contrast to the case of real eigenvalues where the structured and unstructured backward errors coincide.

27

Volker Mehrmann
Institut f. Mathematik, MA 4-5, Str. des 17. Juni 136, D-10623 Berlin, Germany

[email protected]

+493031425736

+493031479706

Self-adjoint Differential Operators and Optimal Control

Peter Kunkel, Volker Mehrmann and Lena Scholz

Motivated by the structure which arises, e.g., in the necessary optimality boundary value problem of DAE constrained linear-quadratic optimal control, a special class of structured DAEs, so-called self-adjoint DAEs, is studied in detail. It is analyzed when and how this structure is actually associated with a self-conjugate operator. Local structure preserving condensed forms under constant rank assumptions are developed that allow one to study existence and uniqueness of solutions. A structured global condensed form and structured reduced models based on derivative arrays are developed as well. Furthermore, the relationship between DAEs with self-conjugate operator and Hamiltonian systems is analyzed and it is characterized when there is an underlying symplectic flow.

28

Emre Mengi
Koç University
Rumelifeneri Yolu 34450
Sariyer, Istanbul, Turkey.

[email protected]

+90 212 3381658

+90 212 3381559

Matrix Functions with Specified Eigenvalues

Michael Karow, Daniel Kressner, Emre Mengi, Ivica Nakic and Ninoslav Truhar

The main object we study is a matrix function depending on a parameter analytically. A singular value optimization characterization is derived for a nearest matrix function with prescribed eigenvalues from such a matrix function with respect to the spectral norm. We start by considering the linear matrix pencil case, that was motivated by an inverse shape estimation problem. The derivation for a linear pencil of the form L(λ) = A + λB extensively exploits the solution space of an associated Sylvester equation of the form AX + BXC = 0, where C is upper triangular with diagonal entries selected from the prescribed eigenvalues. Kroneckerization of the Sylvester equation yields a singular value optimization characterization. We obtain the singular value optimization characterization for a matrix polynomial by means of a linearization of the matrix polynomial, and applying the machinery for the linear pencil case. Finally, the more general characterization for an analytic matrix function is obtained from a matrix polynomial by incorporating polynomial interpolation into the derivation.
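A minimal sketch of the Kroneckerization step for AX + BXC = 0, using the column-stacking identity vec(BXC) = (C^T ⊗ B) vec(X); the sizes and the upper triangular C below are illustrative:

```python
import numpy as np

n, k = 4, 2
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C = np.triu(rng.standard_normal((k, k)))   # upper triangular; its diagonal
                                           # entries play the prescribed eigenvalues
# A X + B X C = 0  <=>  (I kron A + C^T kron B) vec(X) = 0
K = np.kron(np.eye(k), A) + np.kron(C.T, B)
# nontrivial solutions X correspond to a nontrivial null space of K
print(K.shape, np.linalg.matrix_rank(K))
```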

Many of the widely-studied distance problems fall into the scope of this work, e.g., distance to instability, distance to uncontrollability, distance to a nearest defective matrix, and their generalizations for matrix polynomials as well as analytic matrix functions.

29

Debasisha Mishra
Institute of Mathematics and Applications
Andharua, Bhubaneswar 751 003, India.

[email protected]

9337573138


B†-splittings of Matrices

Debasisha Mishra

A new type of matrix splitting called the B†-splitting ([1]), generalizing the notion of the B-splitting ([2]), is introduced first. Then a characterization of the nonnegative Moore-Penrose inverse using the proposed splitting is obtained. Another convergence result and two comparison results for B†-splittings as well as B-splittings are finally discussed.

The above work aims to solve a rectangular system of linear equations by an iterative method using a matrix splitting. The problem then becomes an eigenvalue problem, i.e., finding the spectral radius of the iteration scheme. The contents of the present work are from the author's earlier work ([1] and [3]) jointly written with Dr. K. C. Sivakumar of IIT Madras. The same technique can also be used to compute the Moore-Penrose inverse of a matrix.
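A generic sketch of a splitting iteration of this kind, assuming a proper splitting A = U − V in the sense of Berman and Plemmons (the B†-splittings of the talk form a particular class); the iteration converges to the least-squares solution A†b when the spectral radius of U†V is less than one:

```python
import numpy as np

def splitting_iteration(U, V, b, iters=200):
    # x_{k+1} = U^+ V x_k + U^+ b, with U^+ the Moore-Penrose inverse of U
    Upinv = np.linalg.pinv(U)
    T, c = Upinv @ V, Upinv @ b
    x = np.zeros(U.shape[1])
    for _ in range(iters):
        x = T @ x + c
    return x

# toy proper splitting of the singular matrix A = diag(2, 0)
A = np.diag([2.0, 0.0])
U = np.diag([4.0, 0.0])                     # same range and null space as A
b = np.array([2.0, 0.0])
print(splitting_iteration(U, U - A, b))     # approaches A^+ b = [1, 0]
print(np.linalg.pinv(A) @ b)
```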

References

[1] Mishra, Debasisha; Sivakumar, K. C., On splittings of matrices and nonnegative generalized inverses, Operators and Matrices 6 (1) (2012), 85–95.

[2] Peris, Josep E., A new characterization of inverse-positive matrices, Linear Algebra Appl. 154/156 (1991), 45–58.

[3] Mishra, Debasisha; Sivakumar, K. C., Comparison theorems for a subclass of proper splittings of matrices, Appl. Math. Lett. 25 (2012), 2339–2343.

30

Michael L. Overton
Professor of Computer Science and Mathematics
Courant Institute of Mathematical Sciences
New York University, USA.

[email protected]

+1 212 998 3121

Characterization and Construction of the Nearest Defective Matrix via Coalescence of Pseudospectral Components

Michael L. Overton, Rafikul Alam, Shreemayee Bora and Ralph Byers

Let A be a matrix with distinct eigenvalues and let w(A) be the distance from A to the set of defective matrices (using either the 2-norm or the Frobenius norm). Define Λ_ε, the ε-pseudospectrum of A, to be the set of points in the complex plane which are eigenvalues of matrices A + E with ‖E‖ < ε, and let c(A) be the supremum of all ε with the property that Λ_ε has n distinct components. Demmel and Wilkinson independently observed in the 1980s that w(A) ≥ c(A), and equality was established for the 2-norm by Alam and Bora in 2005. We give new results on the geometry of the pseudospectrum near points where first coalescence of the components occurs, characterizing such points as the lowest generalized saddle point of the smallest singular value of A − zI over z ∈ C. One consequence is that w(A) = c(A) for the Frobenius norm too, and another is the perhaps surprising result that the minimal distance is attained by a defective matrix in all cases. Our results suggest a new computational approach to approximating the nearest defective matrix by a variant of Newton's method that is applicable to both generic and nongeneric cases. Construction of the nearest defective matrix involves some subtle numerical issues which we explain, and we present a simple backward error analysis showing that a certain singular vector residual measures how close the computed matrix is to a truly defective matrix. Finally, we present a result giving lower bounds on the angles of wedges contained in the pseudospectrum and emanating from generic coalescence points. Several conjectures and questions remain open.
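The quantities involved are easy to probe from the characterization that z belongs to Λ_ε exactly when the smallest singular value of A − zI is below ε; a coarse grid sketch (the 2 × 2 matrix and grid are illustrative, and a grid scan is far cruder than the Newton variant proposed in the talk):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.2]])           # distinct eigenvalues 1 and 1.2
xs = np.linspace(0.5, 1.7, 121)
ys = np.linspace(-0.6, 0.6, 121)
sig = np.array([[np.linalg.svd(A - (x + 1j * y) * np.eye(2),
                               compute_uv=False)[-1] for x in xs] for y in ys])
# contour levels of sig are the boundaries of the pseudospectra; the lowest
# saddle of sig between the two components approximates the coalescence
# threshold c(A) discussed above
print(sig.min(), sig.max())
```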

31

Vasilije Perovic
Department of Mathematics
Western Michigan University
Kalamazoo, Michigan 49008

[email protected]

269 270 1900

269 387 4530

Linearizations of Matrix Polynomials in Bernstein Basis

D. Steven Mackey and Vasilije Perovic

Bernstein polynomials were introduced just over a hundred years ago, and since then have found numerous applications, most notably in computer aided geometric design [2]. Due to their importance, a significant amount of research already exists for the scalar case. On the other hand, matrix polynomials expressed in the Bernstein basis were studied only recently [1]. Two considerations led us to systematically study such matrix polynomials: the increasing use of non-monomial bases in practice, and the desirable numerical properties of the Bernstein basis.

For a matrix polynomial P(λ), the classical approach to solving the polynomial eigenvalue problem P(λ)x = 0 is to first convert P into a matrix pencil L with the same finite and infinite elementary divisors, and then work with L. This method has been well studied when P is expressed in the monomial basis. But what about the case when P(λ) is in the Bernstein basis? It is important to avoid reformulating P(λ) into the monomial basis, since a change of basis can introduce numerical errors that were not present in the original problem. Using novel tools, we show how to work directly with P(λ) to generate large families of linearizations that are also expressed in the Bernstein basis. We also construct analogs for the Bernstein basis of the vector spaces of linearizations introduced in [4], and describe some of their basic properties. Connections with low bandwidth Fiedler-like linearizations, which could have numerical impact, are also established.

Additionally, we extend the definitions of structured matrix polynomials in the monomial basis to arbitrary polynomial bases. In the special case of the Bernstein basis, we derive spectral pairings for certain classes of structured matrix polynomials and discuss the existence of structured linearizations. We illustrate how existing structure preserving eigenvalue algorithms for structured pencils in the monomial basis can be adapted for use with structured pencils in the Bernstein basis.

Finally, we note that several results from [3, 5] are readily obtained by specializing our techniques to the case of scalar polynomials expressed in the Bernstein basis.

32

References

[1] A. Amiraslani, R. M. Corless and P. Lancaster, Linearization of matrix polynomials expressed in polynomial bases, IMA Journal of Numerical Analysis, 29 (2009), pp. 141–157.

[2] R. T. Farouki, The Bernstein polynomial basis: a centennial retrospective, Computer Aided Geometric Design, 29 (2012), pp. 379–419.

[3] G. F. Jonsson and S. Vavasis, Solving polynomials with small leading coefficients, SIAM J. Matrix Anal. Appl., 26 (2005), pp. 400–414.

[4] D. S. Mackey, N. Mackey, C. Mehl and V. Mehrmann, Vector spaces of linearizations for matrix polynomials, SIAM J. Matrix Anal. Appl., 27 (2006), pp. 821–850.

[5] J. R. Winkler, A companion matrix resultant for Bernstein polynomials, Linear Algebra Appl., 362 (2003), pp. 153–175.

33

H. K. Pillai
Department of Electrical Engineering
Indian Institute of Technology Bombay
Powai, Mumbai 400 076, India.

[email protected]

+91 22 2576 7424

+91 22 2576 8424

A Linear Algebraic Look at Differential Riccati Equation and Algebraic Riccati (in)equality

H. K. Pillai

The Differential Riccati equation (DRE) plays a central role in optimal control problems. Closely associated to the DRE is the Algebraic Riccati (in)equality (ARI/ARE), which are usually solved nowadays using LMI techniques. I shall present a linear algebraic approach that provides several interesting insights into solutions of the DRE, ARI and ARE. I shall also discuss how these insights may perhaps lead to more efficient algorithms for solving a spectrum of problems arising in control theory and signal processing. These results have been obtained as joint work with Sanand Athalye.

34

Soumyendu Raha
Supercomputer Education and Research Centre
Indian Institute of Science, Bangalore.

[email protected]

+91 80 2293 2791

+91 80 2360 2648

A Regularization Based Method for Dynamic Optimization with High Index DAEs

Soumyendu Raha

We describe a consistent numerical method for solving dynamic optimization problems involving high index DAEs via regularization of the Jacobian matrix of the method. In doing so, we use Aronszajn's lemma to relate the discretization of the problem and the regularization. The method is illustrated with examples.

35

Manideepa Saha
Department of Mathematics
IIT Guwahati, Guwahati 781039, India.

[email protected]

+91 9678000530

A Constructive Method for Obtaining a Preferred Basis from a Quasi-preferred Basis for M-matrices

Manideepa Saha and Sriparna Bandopadhyay

An M-matrix has the form A = sI − B, where s ≥ ρ(B) ≥ 0 and B is an entrywise nonnegative matrix. A quasi-preferred set is an ordered set of nonnegative vectors, and the positivity of the entries of the vectors depends on the graph structure of A in a specified manner. A preferred set is a quasi-preferred basis such that the image of each vector under −A is a nonnegative linear combination of the subsequent vectors, and the coefficients in the linear combinations also depend entirely on the graph structure of A.

The existence of a preferred basis for the generalized eigenspace (i.e., the null space of A^n, where n is the order of A) of an M-matrix A was proved by H. Schneider and D. Hershkowitz in 1988 [2]. But their proof was by the 'tracedown method', which is essentially induction on the diagonal blocks of the Frobenius normal form of the matrix. As it is easier to obtain a quasi-preferred basis than a preferred basis, we introduce a direct method for computing a preferred basis when a quasi-preferred basis is given. We also introduce some special properties of quasi-preferred bases of an M-matrix, and those properties help us to introduce the constructive method for obtaining a preferred basis starting from a quasi-preferred basis. We also illustrate the whole procedure with the help of some typical examples, and we conclude our work by summarizing the whole procedure in the form of an algorithm.

References

[1] A. Berman and R. Plemmons, Nonnegative Matrices in the Mathematical Sciences, 2nd ed., SIAM Publications, Philadelphia, PA, 1994.

[2] D. Hershkowitz and H. Schneider, On the generalized nullspace of M-matrices and Z-matrices, Linear Algebra Appl., 106:5–23, 1988.

[3] U. Rothblum, Algebraic eigenspaces of nonnegative matrices, Linear Algebra Appl., 12:281–292, 1975.

36

G. Sajith
Department of Computer Science
Indian Institute of Technology Guwahati

[email protected]

+91 8970100222

An I/O Efficient Algorithm for Hessenberg Reduction

S. K. Mohanty and G. Sajith

In traditional algorithm design, it is assumed that the main memory is infinite in size and allows random uniform access to all its locations. This enables the designer to assume that all the data fits in the main memory. (Thus, traditional algorithms are often called in-core.) Under these assumptions, the performance of an algorithm is decided by the number of instructions executed, and therefore, the design goal is to optimize it.

These assumptions may not be valid while dealing with massive data sets, because in reality, the main memory is limited, and so the bulk of the data may have to be stored in slow but inexpensive secondary memory. The number of instructions executed would no longer be a reliable performance metric, but the number of inputs/outputs (I/Os) executed would be. I/Os are slow because of the large access times of secondary memories. While designing algorithms for large data sets, the goal is therefore to minimise the number of I/Os executed. The literature is rich in efficient out-of-core algorithms for matrix computation, but very few of them are designed on the external memory model of Aggarwal and Vitter, and as such attempt to quantify their performances in terms of the number of I/Os performed.

This model, introduced by Aggarwal and Vitter, has a single processor and a two level memory. It is assumed that the bulk of the data is kept in the secondary memory, which is a permanent storage. The secondary memory is divided into blocks. An I/O is defined as the transfer of a block of data between the secondary memory and a volatile main memory, which is limited in size. The processor's clock period and the main memory access time are negligible when compared to the secondary memory access time. The measure of performance of an algorithm is the number of I/Os it performs. Algorithms designed on this model are referred to as external memory algorithms. The model defines the following parameters: the size of the problem input (N), the size of the main memory (M), and the size of a disk block (B). They satisfy 1 ≤ B ≤ M < N.

It has been shown that, on this model, the number of I/Os needed to read (write) N contiguous items from (to) the disk is Scan(N) = Θ(N/B), and that the number of I/Os required to sort N items is Sort(N) = Θ((N/B) log_{M/B}(N/B)). For all realistic values of N, B, and M, Scan(N) < Sort(N) ≪ N.
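To make the bounds concrete, a small sketch that evaluates Scan(N) and Sort(N) for one set of hypothetical but realistic parameter values:

```python
import math

def scan(N, B):
    """I/Os to read or write N contiguous items: Theta(N/B)."""
    return math.ceil(N / B)

def sort_io(N, M, B):
    """I/Os to sort N items: Theta((N/B) * log_{M/B}(N/B))."""
    return math.ceil((N / B) * math.log(N / B, M / B))

# Hypothetical but realistic parameters: 10^9 items, a main memory holding
# 10^6 items, and disk blocks of 10^3 items (so 1 <= B <= M < N).
N, M, B = 10**9, 10**6, 10**3
print("Scan(N) =", scan(N, B))        # 1,000,000 I/Os
print("Sort(N) =", sort_io(N, M, B))  # 2,000,000 I/Os; both are far below N
```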

37

We study the problem of Hessenberg reduction on this model. We show that even the blocked variant of Hessenberg reduction [Gregorio Quintana-Orti and Robert van de Geijn, Improving the Performance of Reduction to Hessenberg Form, ACM Transactions on Mathematical Software 32 (June 2006), No. 2, 180–194], in spite of the 70% reduction it achieves in matrix-vector operations, has an I/O complexity of O(N³/B). We propose a Hessenberg reduction algorithm with an I/O complexity that is asymptotically superior.

38

C. S. Sastry
Indian Institute of Technology, Hyderabad
Hyderabad 502205, India.

[email protected]

+91 40 2301 6072

Sparse Description of Linear Systems and Application in Computed Tomography

R. Ramu Naidu, C. S. Sastry and J. Phanindra Varma

In recent years, sparse representation has emerged as a powerful tool for efficiently processing data in non-traditional ways. This is mainly due to the fact that natural data of interest tend to have sparse representations in some basis. A wealth of recent optimization techniques [1] in applied mathematics, under the name of Compressive Sampling Theory (CST or CS theory), aim at providing the sparse description of such data in redundant bases. The present abstract briefly walks through the current developments in sparse representation theory and shows how these developments could be used for applications in Computed Tomography.

A full rank matrix A of size m × n (with m ≪ n) generates an underdetermined system of linear equations Ax = y having infinitely many solutions. The problem of finding the sparsest solution (that is, x for which |{i : x_i ≠ 0}| ≪ n) has been answered positively and constructively through the following optimization problem:

min_α ‖α‖_1 subject to Aα = y.    (1)

The need for the sparse representation arises from the fact that several real life applications demand the representation of data in terms of as few basis (frame) elements as possible. The elements, or the columns of A, are called atoms, and the matrix so generated by them is called the dictionary. The current literature [1] indicates that CST has a number of possible applications in fields like data compression and medical imaging, to name a few. The developments of CST depend typically on sparsity and incoherence. Sparsity expresses the idea that the "information rate" of continuous-time data may be much smaller than suggested by its bandwidth, or that discrete-time data depends on a number of degrees of freedom which is comparably much smaller than its (finite) length. On the other hand, incoherence extends the duality between the time and frequency contents of data.
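Problem (1) can be cast as a linear program by splitting α into its positive and negative parts; the sketch below (an illustration, not code from the abstract) solves it with scipy.optimize.linprog on a small random instance:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 20, 50, 3
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

# Split alpha = u - v with u, v >= 0, so that ||alpha||_1 = sum(u) + sum(v):
# minimize 1'u + 1'v  subject to  A u - A v = y,  u, v >= 0.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
alpha = res.x[:n] - res.x[n:]
print("l1 recovery error:", np.linalg.norm(alpha - x_true))
```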

The recent results in CST concern properties of the redundant dictionaries and address issues such as: 1. what recovery conditions must the matrix A satisfy so that (1) provides the unique sparsest solution to Ax = y? 2. how are the sparsity of x and the size of y related? Candes et al. [1] introduced the Restricted Isometry Property (RIP) to establish the

39

theoretical guarantees for the stated sparse recovery. As verifying RIP is very difficult, concepts such as widths of subsets and null space properties of A are being studied [1] to establish theoretical guarantees for sparse recovery.

Applications in Computed Tomography

The basic objective [2] in Computed Tomography (CT) is to obtain high quality images from projection data obtained using different scanning geometries, such as parallel, fan and cone or spiral cone beam, with as little exposure and as much efficiency as possible. In the following subsections, we outline how reconstruction from incomplete parallel beam data and classification of CT images in the Radon domain can be realized.

Reconstruction of CT images from incomplete data

It is well known that the problem of reconstruction in CT from projection data can be addressed by solving the matrix equation p = RI [2], where p and I are the projection data and the image respectively, and R is the projection matrix. There are several well known methods, like the Algebraic Reconstruction Technique (ART) [2], used for solving the stated matrix equation. The recent CST provides a way out for faster reconstruction, especially when we have limited and noisy projection data:

I = Ψ [ argmin_θ ‖p − RΨθ‖_2 + λ‖θ‖_1 ],    (2)

where Ψ is a suitable sparsifying basis for I (that is, I = ΨI_1, with I_1 being sparse).
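One generic way to attack a problem of the form (2) is iterative soft-thresholding (ISTA), applied here to the penalized least-squares version min_θ ‖p − Rθ‖_2² + λ‖θ‖_1; the sketch below takes Ψ to be the identity for simplicity and is an illustration, not the authors' reconstruction code:

```python
import numpy as np

def ista(R, p, lam, iters=500):
    """ISTA for min_theta ||p - R theta||_2^2 + lam * ||theta||_1.

    Illustrative solver; the sparsifying basis Psi is taken to be the
    identity, so the image is I = theta directly.
    """
    t = 1.0 / (2.0 * np.linalg.norm(R, 2) ** 2)   # step 1/L; L = Lipschitz const of gradient
    theta = np.zeros(R.shape[1])
    for _ in range(iters):
        z = theta - t * 2.0 * R.T @ (R @ theta - p)                # gradient step on quadratic
        theta = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # soft-threshold: prox of l1
    return theta

rng = np.random.default_rng(0)
R = rng.standard_normal((30, 100))                 # stand-in for the projection matrix
I_true = np.zeros(100); I_true[[5, 17, 42]] = [1.0, -2.0, 1.5]
p = R @ I_true                                     # noiseless projection data
theta = ista(R, p, lam=0.01)
print("recovered support:", np.nonzero(np.abs(theta) > 1e-2)[0])  # expect 5, 17, 42
```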

Classification of CT images directly from projection data

The classification of CT images into various classes possessing various degrees of abnormalities is of relevance in medical imaging. Such a classification directly in the Radon domain is a way out for the automated classification of medical images, and this approach is free from reconstruction artifacts in the classification. Motivated by dictionary learning [1], we realize our objective of classifying CT images in the Radon domain through the following optimization problem:

Σ_{j=1}^{n} Σ_{i=1}^{K} min_{D_i, C_i, θ_l} Σ_{x_j ∈ C_j} ‖Rx_j − Dα‖_2² + γ‖α‖_1,  where γ > 0.    (3)

40

Acknowledgement: One of the authors (CSS) is thankful to DST, Govt. of India for the support (Ref: SR/FTP/ETA-54/2009) he received.

References

[1] Y. Eldar and G. Kutyniok, Compressed Sensing: Theory and Applications, Cambridge Univ. Press, 2012.

[2] F. Natterer, Mathematical Methods in Image Reconstruction, SIAM, 2001.

41

Punit Sharma
Department of Mathematics
Indian Institute of Technology Guwahati
Guwahati 781039, India.

[email protected]

9957939372

Structured Backward Error of Approximate Eigenvalues of T-palindromic Polynomials

Punit Sharma, Shreemayee Bora, Michael Karow and Christian Mehl

We study the backward error of approximate eigenvalues of T-palindromic polynomials with respect to structure preserving perturbations that affect up to four coefficients. Expressions for the backward error are given for T-palindromic pencils and quadratic polynomials. The same are also obtained for real approximate eigenvalues of real T-palindromic cubic polynomials. Finally, it is also shown that the analysis leads to a lower bound for the structured backward error of an approximate eigenvalue of any T-palindromic polynomial, which is also an upper bound of the unstructured backward error.

42

Valeria Simoncini
Dipartimento di Matematica
Universita di Bologna
Piazza di Porta San Donato, 5
40126 Bologna, Italy

[email protected]

On the Numerical Solution of Large-scale Linear Matrix Equations

Valeria Simoncini

Linear matrix equations such as the Lyapunov and Sylvester equations play an important role in the analysis of dynamical systems, in control theory, in eigenvalue computation, and in other scientific and engineering application problems.

A variety of robust numerical methods exists for the solution of small dimensional linear equations, whereas the large scale case still poses a great challenge.

In this talk we review several available methods, from classical ADI to recently developed projection methods making use of "second generation" Krylov subspaces. Both algebraic and computational aspects will be considered.

43

Sukhjit Singh
Department of Mathematics
Indian Institute of Technology Kharagpur
Kharagpur 721302, India.

[email protected]

+91 7872001816

Eigenvalues of Symmetric Interval Matrices Using a Single Step Eigen Perturbation Method

Sukhjit Singh and D. K. Gupta

This paper deals with eigenvalue problems involving interval symmetric matrices. The deviation amplitude of the interval matrix is considered as a perturbation around the nominal value of the interval matrix. Using the concepts of interval analysis, interval eigenvalues are computed by the interval extension of the single step eigenvalue perturbation method, which is an improved version of the multi-step perturbation applied in a single step. Two numerical examples are worked out and the results obtained are compared with those of existing methods. It is observed that our method is reliable, efficient, and gives better results for all examples.
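The single step eigen perturbation method itself is not reproduced in the abstract; as a crude point of comparison, the following sketch encloses the eigenvalues of a symmetric interval matrix using Weyl's inequality (the matrices and the bound are illustrative, not the authors' method):

```python
import numpy as np

def weyl_eig_bounds(A_mid, A_rad):
    """Enclose the eigenvalues of the symmetric interval matrix
    [A_mid - A_rad, A_mid + A_rad] using Weyl's inequality.

    For symmetric Delta with |Delta| <= A_rad entrywise,
    ||Delta||_2 = rho(Delta) <= rho(A_rad), so each eigenvalue moves
    by at most rho(A_rad) from the midpoint eigenvalue.
    """
    lam = np.linalg.eigvalsh(A_mid)              # midpoint eigenvalues, ascending
    r = max(abs(np.linalg.eigvals(A_rad)))       # spectral radius of the radius matrix
    return [(l - r, l + r) for l in lam]

A_mid = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
A_rad = 0.1 * np.ones((2, 2))                    # entrywise deviation amplitude
for lo, hi in weyl_eig_bounds(A_mid, A_rad):
    print(f"[{lo:.4f}, {hi:.4f}]")
```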

References

[1] G. Alefeld and G. Mayer, Interval analysis: theory and applications, Journal of Computational and Applied Mathematics, Vol. 121 (2000), pp. 421–464.

[2] S. S. A. Ravi, T. K. Kundra and B. C. Nakra, Single step eigen perturbation method for structural dynamic modification, Mechanics Research Communications, Vol. 22 (1995), pp. 363–369.

[3] Z. P. Qiu, S. Chen and I. Elishakoff, Bounds of eigenvalues for structures with an interval description of uncertain-but-non-random parameters, Chaos, Solitons and Fractals, Vol. 7 (1996), pp. 425–434.

[4] Z. P. Qiu, I. Elishakoff and James H. Starnes Jr, The bound set of possible eigenvalues of structures with uncertain but non-random parameters, Chaos, Solitons and Fractals, Vol. 7 (1996), pp. 1845–1857.

44

K. C. Sivakumar
Department of Mathematics
Indian Institute of Technology Madras
Chennai 600 036, India.

[email protected]

+91 44 22574622

Nonnegative Generalized Inverses and Certain Subclasses of Singular Q-matrices

K.C. Sivakumar

The notion of Q-matrices is quite well understood in the theory of linear complementarity problems. In this article, the author considers three variations of Q-matrices, typically applicable for singular matrices. The main result presents a relationship of these notions (for a Z-matrix) with the nonnegativity of the Moore-Penrose inverse of the matrix concerned.

45

Ivan Slapnicar
University of Split
Faculty of Electrical Engineering, Mechanical Engg. and Naval Architecture
R. Boskovica 32, HR-21000 Split, Croatia.

[email protected]

Accurate Eigenvalue Decomposition of Arrowhead Matrices and Applications

Ivan Slapnicar, Nevena Jakovcevic Stor and Jesse Barlow

We present a new, improved algorithm for solving the eigenvalue problem of a real symmetric arrowhead matrix. Under certain conditions the algorithm computes all eigenvalues and all components of the corresponding eigenvectors with high relative accuracy in O(n²) operations. The algorithm is based on a shift-and-invert technique, where in some cases it may be necessary to compute only one element of the inverse of the shifted matrix with double precision arithmetic. Each eigenvalue and the corresponding eigenvector are computed separately, which makes the algorithm suitable when only part of the spectrum is required and for parallel computing. We also present applications to Hermitian arrowhead matrices, symmetric tridiagonal matrices and diagonal-plus-rank-one matrices, and give numerical examples.
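The shift-and-invert algorithm is not given in the abstract; for orientation, here is a sketch of the classical secular-equation route to arrowhead eigenvalues, which bisects f(λ) = α − λ − Σ_i z_i²/(d_i − λ) between the poles d_i (assuming distinct d_i and nonzero z_i; this is not the authors' new algorithm):

```python
import numpy as np
from scipy.optimize import brentq

def arrowhead_eigvals(d, z, alpha, eps=1e-8):
    """Eigenvalues of the symmetric arrowhead matrix [[diag(d), z], [z', alpha]]
    via bisection on the secular equation (assumes distinct d_i, all z_i != 0)."""
    d = np.asarray(d, float); z = np.asarray(z, float)
    idx = np.argsort(d)[::-1]           # sort poles descending, keeping z paired
    d, z = d[idx], z[idx]
    f = lambda lam: alpha - lam - np.sum(z**2 / (d - lam))
    spread = np.linalg.norm(z) + 1.0    # all eigenvalues lie within this margin
    brackets = [(d[0] + eps, max(d[0], alpha) + spread)]          # above the top pole
    brackets += [(d[i + 1] + eps, d[i] - eps) for i in range(len(d) - 1)]
    brackets.append((min(d[-1], alpha) - spread, d[-1] - eps))    # below the bottom pole
    return np.sort([brentq(f, a, b) for a, b in brackets])

d, z, alpha = [3.0, 2.0, 1.0], [0.5, 0.3, 0.2], 4.0
lam = arrowhead_eigvals(d, z, alpha)
A = np.block([[np.diag(d), np.reshape(z, (-1, 1))],
              [np.reshape(z, (1, -1)), [[alpha]]]])
print(np.allclose(lam, np.linalg.eigvalsh(A)))  # compare against LAPACK
```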

46

Shwetabh Srivastava
Department of Mathematics
Indian Institute of Technology Kharagpur
Kharagpur 721302, India.

[email protected]

+91 8016367291

A Family of Iterative Methods for Computing the Moore–Penrose Generalized Inverse

Shwetabh Srivastava and D. K. Gupta

Based on a quadratically convergent method proposed in [1], a family of iterative methods to compute the Moore–Penrose generalized inverse of a matrix is presented. Convergence analysis along with error estimates of the proposed methods is discussed. The theoretical proofs and numerical experiments show that these iterative methods are very effective.
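As a related classical example (not the authors' family, which builds on the Penrose-equation iteration of [1]), the quadratically convergent Newton–Schulz iteration X_{k+1} = X_k(2I − AX_k) also converges to the Moore–Penrose inverse from a suitably scaled X_0:

```python
import numpy as np

def newton_schulz_pinv(A, iters=60):
    """Quadratically convergent iteration X_{k+1} = X_k (2I - A X_k) for A^+.

    X_0 = A' / (||A||_1 ||A||_inf) guarantees convergence, since then the
    spectral radius of (I - A X_0) restricted to range(A) is less than 1.
    """
    A = np.asarray(A, dtype=float)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = X @ (2.0 * I - A @ X)
    return X

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
print(np.allclose(newton_schulz_pinv(A), np.linalg.pinv(A)))  # True
```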

References

[1] Marko D. Petkovic and Predrag S. Stanimirovic, Iterative method for computing the Moore-Penrose inverse based on Penrose equations, Journal of Computational and Applied Mathematics, 235 (2011), pp. 1604–1613.

[2] Haibin Chen and Yiju Wang, A family of higher-order convergent iterative methods for computing the Moore-Penrose inverse, Appl. Math. Comput., 218 (2011), pp. 4012–4016.

47

Ravi Srivastava
Department of Mathematics
IIT Guwahati, Guwahati 781039, India.

[email protected]

09678883441

Distance Problems for Hermitian Matrix Pencils - an ε-Pseudospectra Based Approach

Shreemayee Bora and Ravi Srivastava

Given a definite pencil L(z) = zA − B, we present a bisection type algorithm for computing its Crawford number and a nearest Hermitian pencil that is not definite with respect to the norm |‖L‖| = √(‖A‖² + ‖B‖²), where ‖·‖ denotes the 2-norm.

We also provide numerical experiments that compare the proposed algorithm with similar algorithms proposed in [1], [2] and [3].

The same technique may also be used to find the distance from a definitizable pencil to a nearest Hermitian pencil that is not definitizable with respect to the norm |‖·‖|.
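The bisection algorithm itself is not reproduced in the abstract; for a Hermitian pair (A, B) one has the characterization γ(A, B) = max_θ λ_min(A cos θ + B sin θ), the quantity the arc algorithm of [1] maximizes. A coarse grid search over θ gives a rough estimate (an illustration, not the proposed algorithm):

```python
import numpy as np

def crawford_estimate(A, B, samples=3600):
    """Coarse grid estimate of the Crawford number of the Hermitian pair (A, B),
    using gamma(A, B) = max over theta of lambda_min(A cos(theta) + B sin(theta)).
    A positive value certifies that the pencil zA - B is definite."""
    best = -np.inf
    for t in np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False):
        lam_min = np.linalg.eigvalsh(np.cos(t) * A + np.sin(t) * B)[0]
        best = max(best, lam_min)
    return best

A = np.diag([1.0, 2.0, 3.0])       # A > 0, so the pair (A, B) is definite
B = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
print("Crawford number estimate:", crawford_estimate(A, B))
```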

References

[1] Chun-Hua Guo, Nicholas J. Higham, and Francoise Tisseur, An improved arc algorithm for detecting definite Hermitian pairs, SIAM J. Matrix Anal. Appl., 31(3):1131–1151, 2009.

[2] Nicholas J. Higham, Francoise Tisseur, and Paul M. Van Dooren, Detecting a definite Hermitian pair and a hyperbolic or elliptic quadratic eigenvalue problem, and associated nearness problems, Linear Algebra Appl., 351/352:455–474, 2002. Fourth special issue on linear systems and control.

[3] F. Uhlig, On computing the generalized Crawford number of a matrix, Linear Algebra Appl. (2011), doi:10.1016/j.laa.2011.06.024.

48

David S. Watkins
Department of Mathematics
Washington State University
Pullman, WA 99163-3113, USA.

[email protected]

001-509-335-7256

001-509-335-1188

What's So Great About Krylov Subspaces?

David S. Watkins

Everybody knows that Krylov subspaces are great. The most popular algorithms for solving linear systems and eigenvalue problems for large, sparse matrices are Krylov subspace methods. I hope to convince you that they are even greater than you realized. As usual, I will focus on eigenvalue problems. Most general-purpose eigensystem algorithms, including John Francis's implicitly-shifted QR algorithm, are based on the power method and its multi-dimensional generalizations. We will discuss the synergy between the power method and Krylov subspaces, and we will show that by introducing Krylov subspaces into the discussion sooner than usual, we are able to get a simple, satisfying explanation of Francis's and related algorithms that does not rely on making a connection with the "explicit" QR algorithm.
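To see the synergy concretely: after k steps the power method keeps only A^k v, while a Krylov method keeps the whole history span{v, Av, ..., A^{k-1}v}. A minimal Arnoldi sketch (illustrative, not from the talk) compares the two on a random symmetric matrix:

```python
import numpy as np

def arnoldi(A, v, k):
    """Build an orthonormal basis Q of span{v, Av, ..., A^{k-1} v} and the small
    matrix H = Q' A Q whose eigenvalues (Ritz values) approximate those of A."""
    n = len(v)
    Q = np.zeros((n, k + 1)); H = np.zeros((k + 1, k))
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt orthogonalization
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        Q[:, j + 1] = w / H[j + 1, j]
    return Q[:, :k], H[:k, :k]

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200))
A = A.T @ A / 200.0                          # symmetric positive semidefinite test matrix
v = rng.standard_normal(200)
k = 20
exact = np.sort(np.linalg.eigvalsh(A))

# Power method keeps only A^k v ...
w = v.copy()
for _ in range(k):
    w = A @ w; w /= np.linalg.norm(w)
print("power method error:", abs(w @ A @ w - exact[-1]))

# ... while the Krylov subspace keeps the whole history.
Q, H = arnoldi(A, v, k)
ritz = np.sort(np.linalg.eigvals(H).real)
print("Krylov (Ritz) error:", abs(ritz[-1] - exact[-1]))
```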

49

Author Index

Adhikari, Bibhas, 1
Ahmad, Ayaz, 2
Ahmad, Sk Safique, 3
Ahuja, Kapil, 5
Alam, Rafikul, 6, 31

Bandopadhyay, Sriparna, 36
Barlow, Jesse, 46
Behera, Namita, 6
Belur, Madhu N., 7
Benner, Peter, 5, 10
Bora, Shreemayee, 27, 31, 42, 48
Breiten, Tobias, 10
Byers, Ralph, 31

Datta, Biswa Nath, 11
Dopico, Froilan M., 12

Faßbender, Heike, 13

Garvey, Seamus D, 20
Gupta, D. K., 44, 47

Kabi, Bibek, 14
Kalaimani, Rachel Kalpana, 17
Kalita, Karuna, 20
Karow, Michael, 22, 27, 29, 42
Kirkland, Stephen, 23
Kressner, Daniel, 29
Kunkel, Peter, 28

Limaye, Balmohan V., 24
Linh, Vu Hoang, 25

Mackey, D. Steven, 26, 32
Mehl, Christian, 27, 42
Mehrmann, Volker, 28
Mengi, Emre, 29
Mishra, Debasisha, 30

Mohanty, Ramanarayan, 14
Mohanty, S. K., 37

Naidu, R. Ramu, 39
Nakic, Ivica, 29

Overton, Michael L., 31

Perovic, Vasilije, 32
Pillai, H. K., 34
Pradhan, Tapan, 14

Raha, Soumyendu, 35
Routray, Aurobinda, 14

Saha, Manideepa, 36
Sajith, G., 37
Sasmal, Aritra, 20
Sastry, C. S., 39
Scholz, Lena, 28
Sharma, Punit, 27, 42
Simoncini, Valeria, 43
Singh, Sukhjit, 44
Sivakumar, K. C., 45
Slapnicar, Ivan, 46
Srivastava, Ravi, 48
Srivastava, Shwetabh, 47
Stor, Nevena Jakovcevic, 46
Sturler, Eric de, 5

Truhar, Ninoslav, 29

Varma, J. Phanindra, 39

Watkins, David S., 49

50