Heike Faßbender: Abstracts
Heike Faßbender and Peter Benner to appear in Automatisierungstechnik 2006 Abstract
In the modelling of dynamical systems one nowadays frequently obtains systems of high order (i.e., with 10,000 and more equations). To keep the computing time of a numerical simulation acceptable, the given dynamical system of equations is reduced to a system of the same form that can be solved in a greatly reduced computing time. The reduced system is frequently required to have the same properties as the unreduced model; important properties in this context are, in particular, stability and passivity. This contribution gives an overview of numerical methods for passivity-preserving model reduction.
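As a point of reference for the reduction step described above, the following sketch implements plain balanced truncation with SciPy. It only illustrates the general model-reduction setting: plain balanced truncation preserves stability but not passivity, so it is not one of the passivity-preserving methods surveyed in the paper; the function name and the target order r are chosen here for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Reduce a stable, minimal LTI system (A, B, C) to order r by balanced truncation.
    Note: this preserves stability, not passivity."""
    # Controllability and observability Gramians:
    #   A P + P A^T + B B^T = 0,   A^T Q + Q A + C^T C = 0
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    # Square-root balancing via Cholesky factors and an SVD
    Lp = cholesky(P, lower=True)
    Lq = cholesky(Q, lower=True)
    U, s, Vt = svd(Lq.T @ Lp)          # singular values s are the Hankel singular values
    S = np.diag(s[:r] ** -0.5)
    T = Lp @ Vt[:r].T @ S              # right projection matrix
    W = Lq @ U[:, :r] @ S              # left projection matrix (W.T @ T = I)
    return W.T @ A @ T, W.T @ B, C @ T
```

Calling `balanced_truncation(A, B, C, r)` for a stable, minimal realization returns a reduced realization of order r whose transfer function approximates the original one.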
Heike Faßbender and Khakim D. Ikramov Preprint 2005 Abstract
According to 'Matrix Analysis' by R. Horn and C. Johnson
\cite{HorJ85}, $\lambda$ is a coneigenvalue of a matrix $A \in \mathbb{C}^{n \times n}$
if there exists a nonzero vector $x \in \mathbb{C}^n$ such that
$A\overline{x} = \lambda x$. Coneigenvalues defined in this way may exist if and only
if $\overline{A}A$ has real nonnegative (ordinary) eigenvalues. If $A$ does have
coneigenvalues, there are always infinitely many of them.
Here we adopt a different definition that provides any matrix $A \in
\mathbb{C}^{n \times n}$ with a set of exactly $n$ coneigenvalues
(counting multiplicities). Based on this definition and on the
concept of a coninvariant subspace, introduced but not used much in
a paper by Y.P. Hong and R. Horn \cite{HonH85}, we attempt to
derive a coherent theory that incorporates the most important facts
in the theory of unitary congruence transformations, such as the
Youla theorem, normal forms for symmetric and conjugate-normal
matrices, and so on. A number of new facts on conjugate-normal
matrices are obtained in the course of this construction. For
example, we characterize conjugate-normal matrices as unitary
congruence images of real normal matrices; in particular, this
implies that the singular values of a conjugate-normal matrix are
the moduli of its coneigenvalues.
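Two of the facts quoted above follow directly from the definition; for the record (a standard computation, not specific to the paper):
$$A\overline{x} = \lambda x \;\Longrightarrow\; \overline{A}x = \overline{\lambda}\,\overline{x} \;\Longrightarrow\; \overline{A}A\,\overline{x} = \lambda\,\overline{A}x = |\lambda|^2\,\overline{x},$$
so $|\lambda|^2$ is an ordinary (real, nonnegative) eigenvalue of $\overline{A}A$. Moreover, replacing $x$ by $\mu x$ with $|\mu| = 1$ gives $A\,\overline{\mu x} = (\overline{\mu}^{\,2}\lambda)(\mu x)$, which is why coneigenvalues in the sense of \cite{HorJ85} always come in infinite families once they exist at all.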
SYMMLQ-like procedure for Ax=b where A is a special normal matrix, Heike Faßbender and Khakim D. Ikramov to appear in Calcolo, Preprint 2005 Abstract We propose a method of Lanczos type for
solving a linear system with a normal matrix whose spectrum is contained in
a second degree curve. This is a broader class of matrices than $(l,m)$-normal
matrices introduced in a recent paper by Barth and Manteuffel. Our approach
is similar to that by Huhtanen in the sense that both use the condensed form
of normal matrices discovered by Elsner and Ikramov. However, there are a
number of differences, among which are: (i) our method is modeled after the
SYMMLQ algorithm of Paige and Saunders; (ii) it uses only one matrix-vector
product per step; (iii) we provide effective means for monitoring the size
of the residual during the process. We present the results of numerical experiments,
which confirm the efficiency of our method. In particular, we include a comparison
of our SYMMLQ-like procedure with the GMRES algorithm of Saad and Schultz.
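The following sketch only sets up the problem class and the GMRES baseline mentioned in the comparison; it does not implement the proposed SYMMLQ-like procedure, and the parabola used for the spectrum is an arbitrary choice of second-degree curve.

```python
import numpy as np
from scipy.stats import ortho_group
from scipy.sparse.linalg import gmres

n = 100
t = np.linspace(-1.0, 1.0, n)
lam = t + 1j * t**2                       # spectrum on a second-degree curve (a parabola)
Q = ortho_group.rvs(n, random_state=0)    # real orthogonal, so A below is normal
A = Q @ np.diag(lam) @ Q.T
b = np.random.default_rng(0).standard_normal(n).astype(complex)

x, info = gmres(A, b, restart=n)          # full (unrestarted) GMRES as the baseline
print(info, np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```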
Structured Eigenvalue Problems, Heike Faßbender and Daniel Kressner GAMM Mitteilungen 29, No. 2, Themenheft Applied and Numerical Linear Algebra, Part II, pp. 297 - 318, 2006 Abstract
Most eigenvalue problems arising in practice are known to be structured. Structure
is often introduced by discretization and linearization techniques but may also
be a consequence of properties induced by the original problem. Preserving this
structure can help preserve physically relevant symmetries in the eigenvalues of the matrix
and may improve the accuracy and efficiency of an eigenvalue computation.
The purpose of this brief survey is to highlight these facts for some
common matrix structures. This includes a treatment of rather general concepts such as
structured condition numbers and backward errors as well as an overview
of algorithms and applications for several matrix classes including
symmetric, skew-symmetric, persymmetric, block cyclic, Hamiltonian, symplectic
and orthogonal matrices.
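As a small illustration of the kind of structure-induced eigenvalue symmetry meant here (an example constructed for this page, not taken from the survey): the spectrum of a real Hamiltonian matrix is symmetric with respect to both the real and the imaginary axis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
G = rng.standard_normal((n, n)); G = G + G.T          # symmetric
Q = rng.standard_normal((n, n)); Q = Q + Q.T          # symmetric
H = np.block([[A, G], [Q, -A.T]])                     # real Hamiltonian matrix

ev = np.linalg.eigvals(H)
# Eigenvalues occur in quadruples {lam, -lam, conj(lam), -conj(lam)} (up to rounding):
print(np.sort_complex(ev))
print(np.sort_complex(-ev))
```

A structure-preserving eigensolver keeps this pairing exactly in finite precision, whereas a general nonsymmetric solver only matches it up to rounding errors.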
Several observations on symplectic, Hamiltonian, and skew-Hamiltonian matrices, Heike Faßbender and Khakim D. Ikramov Linear Algebra and its Applications 400, pp. 15 - 29, 2005 Abstract We prove a Hamiltonian/skew-Hamiltonian
version of the classical theorem relating strict equivalence and T-congruence
between pencils of complex symmetric or skew-symmetric matrices. Then, we
give a pure symplectic variant of the recent result of Xu concerning the
singular value decomposition of a conjugate symplectic matrix. Finally, we
discuss implications that can be derived from Veselić's result on definite
pairs of Hermitian matrices for the skew-Hamiltonian situation.
Computing matrix-vector products with centrosymmetric and centrohermitian matrices, Heike Faßbender and Khakim D. Ikramov Linear Algebra and its Applications 364, pp. 235 - 241, 2003 Abstract An algorithm proposed recently by A. Melman
reduces the costs of computing the product $Ax$ with a symmetric centrosymmetric
matrix $A$ as compared to the case of an arbitrary $A.$ We show that the
same result can be achieved by a simpler algorithm, which requires only that
$A$ be centrosymmetric. However, if $A$ is Hermitian or symmetric, this can
be exploited to some extent. Also, we show that similar gains are possible
when $A$ is a skew-centrosymmetric or a centrohermitian matrix.
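A minimal sketch of the block-folding idea for centrosymmetric matrix-vector products, written independently of the paper (even order only; the function name is chosen here for illustration): two half-size products replace one full-size product, roughly halving the multiplication count once $B \pm CJ$ are available.

```python
import numpy as np

def centro_matvec(A, x):
    """y = A @ x for a centrosymmetric A (A == A[::-1, ::-1]) of even order,
    using two (n/2) x (n/2) products instead of one n x n product."""
    n = A.shape[0]; m = n // 2
    B, C = A[:m, :m], A[:m, m:]     # A = [[B, C], [J C J, J B J]], J = flip matrix
    CJ = C[:, ::-1]                 # C J
    p = x[:m] + x[m:][::-1]         # x1 + J x2
    q = x[:m] - x[m:][::-1]         # x1 - J x2
    u = (B + CJ) @ p
    v = (B - CJ) @ q
    y_top = 0.5 * (u + v)           # = B x1 + C x2
    y_bot = (0.5 * (u - v))[::-1]   # = J (C J x1 + B J x2)
    return np.concatenate([y_top, y_bot])

# quick check against the plain product
rng = np.random.default_rng(0)
m = 4
B = rng.standard_normal((m, m)); C = rng.standard_normal((m, m))
A = np.block([[B, C], [C[::-1, ::-1], B[::-1, ::-1]]])   # centrosymmetric by construction
x = rng.standard_normal(2 * m)
assert np.allclose(centro_matvec(A, x), A @ x)
```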
Computing roots
of matrix products, Abstract The problem of computing a $k$th root of a matrix product $W = \prod_{i=1}^k A_i$ is considered. The explicit computation of $W$ may produce a highly inaccurate result due to rounding errors, so that the computed root $\widehat W^{\frac{1}{k}}$ is far from the actual root $W^{\frac{1}{k}}$. An algorithm for computing the $k$th root of $W$ is presented which avoids the explicit computation of $W$ by employing the periodic Schur decomposition and therefore yields better accuracy in the computed root $\widehat W^{\frac{1}{k}}$.
A Hybrid Method for
the Numerical Solution of Discrete-Time Algebraic Riccati Equations,
Abstract A discrete-time algebraic Riccati equation
(DARE) is a set of nonlinear equations. One of the oldest and best studied numerical methods for solving it is Newton's method. Finding a stabilizing starting guess that is already close to the desired solution is crucial.
We propose to compute an approximate solution of the DARE by the (butterfly)
$SZ$ algorithm applied to the corresponding symplectic pencil where zero and
infinity eigenvalues are removed using an iterative deflation strategy. This
algorithm is a fast, reliable and structure-preserving algorithm for computing
the stable deflating subspace of the symplectic matrix pencil associated with
the DARE. From this, a stabilizing starting guess for Newton's method is
easily obtained. The resulting method is very efficient and produces highly
accurate results. Numerical examples demonstrate the behavior of the resulting
hybrid method.
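For reference, one common form of the DARE and of the symplectic pencil referred to above (sign and scaling conventions differ between papers) is
$$X = A^T X A - A^T X B\,(R + B^T X B)^{-1} B^T X A + Q, \qquad
L - \lambda N = \begin{bmatrix} A & 0 \\ -Q & I \end{bmatrix} - \lambda \begin{bmatrix} I & BR^{-1}B^T \\ 0 & A^T \end{bmatrix};$$
if the columns of $\begin{bmatrix} Y \\ Z \end{bmatrix}$ span the stable deflating subspace of $L - \lambda N$, the stabilizing solution is $X = Z Y^{-1}$.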
Hamilton and Jacobi
come full circle: Jacobi algorithms for structured Hamiltonian eigenproblems,
The Parameterized
SR Algorithm for Symplectic (Butterfly) Matrices, Abstract The $SR$ algorithm is a structure-preserving
algorithm for computing the spectrum of symplectic matrices. Any symplectic
matrix can be reduced to symplectic butterfly form. A symplectic matrix $B$
in butterfly form is uniquely determined by $4n-1$ parameters. Using these
$4n-1$ parameters, we show how one step of the symplectic $SR$ algorithm for
$B$ can be carried out in ${\cal O}(n)$ arithmetic operations compared to
${\cal O}(n^3)$ arithmetic operations when working on the actual symplectic
matrix. Moreover, the symplectic structure, which would otherwise be destroyed by roundoff errors in the numerical process when working with the symplectic (butterfly) matrix itself, is enforced by working only with the parameters.
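For orientation, the butterfly structure referred to here and in the related abstracts below can be sketched as follows (following the usual description of the form, not copied from the papers):
$$B = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} \in \mathbb{R}^{2n \times 2n}, \qquad B_{11},\, B_{21} \text{ diagonal}, \quad B_{12},\, B_{22} \text{ tridiagonal},$$
which accounts for the $2n + 2(3n-2) = 8n-4$ generically nonzero entries; the symplecticity conditions reduce these to $4n-1$ independent parameters.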
Cholesky-like Factorizations
of Skew-Symmetric Matrices, Abstract Every real skew-symmetric matrix $B$ admits
Cholesky-like factorizations $B = R^T J R$, where $J = \left[\begin{smallmatrix} 0 & I \\ -I & 0 \end{smallmatrix}\right]$.
This paper presents a backward-stable ${\cal O}(n^{3})$ process for computing
such a decomposition, in which $R$ is a permuted triangular matrix. Decompositions
of this type are a key ingredient of algorithms for solving eigenvalue problems
with Hamiltonian structure.
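A toy instance of such a factorization (for $n = 1$, ignoring the pivoting that the backward-stable algorithm needs):
$$\begin{bmatrix} 0 & \beta \\ -\beta & 0 \end{bmatrix} = R^T J R, \qquad
R = \begin{bmatrix} 1 & 0 \\ 0 & \beta \end{bmatrix}, \quad
J = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}.$$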
An Implicitly Restarted
Symplectic Lanczos Method for the Symplectic Eigenvalue Problem, Abstract An implicitly restarted symplectic Lanczos
method for the symplectic eigenvalue problem is presented. The Lanczos vectors
are constructed to form a symplectic basis. The inherent numerical difficulties
of the symplectic Lanczos method are addressed by inexpensive implicit restarts.
The method is used to compute some eigenvalues and eigenvectors of large
and sparse symplectic matrices.
Error Analysis
of the Symplectic Lanczos Method for the Symplectic Eigenvalue Problem, Abstract An error analysis of the symplectic Lanczos
algorithm for the symplectic eigenvalue problem in finite-precision arithmetic
is given, under the assumption that no breakdown occurs. The analysis shows that the restriction
of preserving the symplectic structure does not destroy the characteristic
feature of nonsymmetric Lanczos processes. An analog of Paige's theory on
the relationship between the loss of orthogonality among the Lanczos vectors
and the convergence of Ritz values in the symmetric Lanczos algorithm is discussed.
As is to be expected, it follows that (under certain assumptions) the computed J-orthogonal Lanczos vectors lose their J-orthogonality when some Ritz values
begin to converge.
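For the record, J-orthogonality of a Lanczos basis $S \in \mathbb{R}^{2n \times 2k}$ means
$$S^T J_{2n} S = J_{2k}, \qquad J_{2m} = \begin{bmatrix} 0 & I_m \\ -I_m & 0 \end{bmatrix},$$
and the loss of J-orthogonality referred to above is measured by the growth of $\|S^T J_{2n} S - J_{2k}\|$ in finite-precision arithmetic (this is the standard meaning; the paper's precise normalization may differ in detail).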
On the Perturbation
Theory for the Unitary Eigenproblem, Abstract Some aspects of the perturbation problem
of the eigenvalues of unitary matrices are treated. The mathematical structure
of the unitary matrices is closely analogous to the structure of symmetric/Hermitian
matrices. Using this relationship a Courant-Fischer-type theorem for the
eigenvalues of unitary matrices is presented and a Kahan-like inclusion theorem
is given.
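For comparison, the classical Courant-Fischer theorem for a Hermitian matrix $H \in \mathbb{C}^{n \times n}$ with eigenvalues $\lambda_1 \ge \cdots \ge \lambda_n$ states
$$\lambda_k \;=\; \max_{\dim \mathcal{S} = k}\; \min_{0 \ne x \in \mathcal{S}} \frac{x^* H x}{x^* x};$$
the result referred to above is an analogue of this min-max characterization for the eigenvalues of unitary matrices, which lie on the unit circle.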
Initializing Newton's
method for discrete-time algebraic Riccati equations using the butterfly SR
algorithm, Abstract The numerical solution of discrete-time
algebraic Riccati equations is discussed. We propose to compute an approximate
solution of the discrete-time algebraic Riccati equation by the (butterfly)
$SZ$ algorithm. This solution is then refined by a defect correction method
based on Newton's method. The resulting method is very efficient and produces
highly accurate results.
SR and SZ algorithms for
the symplectic (butterfly) eigenproblem, Abstract $SR$ and $SZ$ algorithms for the symplectic
(generalized) eigenproblem that are based on the reduction of a symplectic
matrix to symplectic butterfly form are discussed. A $2n\times 2n$ symplectic
butterfly matrix has $8n-4$ (generically) nonzero entries, which are determined
by $4n-1$ parameters. While the $SR$ algorithm operates directly on the matrix
entries, the $SZ$ algorithm works with the $4n-1$ parameters. The algorithms
are made more compact and efficient by using Laurent polynomials, instead
of standard polynomials, to drive the iterations.
Hamiltonian Square
Roots of Skew-Hamiltonian Matrices, Abstract We present a constructive existence proof
that every real skew-Hamiltonian matrix $W$ has a real Hamiltonian square
root. The key step in this construction shows how one may bring any such
$W$ into a real quasi-Jordan canonical form via symplectic similarity. We
show further that every $W$ has infinitely many real Hamiltonian square roots,
and give a lower bound on the dimension of the set of all such square roots.
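Recall that, with $J = \begin{bmatrix} 0 & I \\ -I & 0 \end{bmatrix}$, a matrix $H$ is Hamiltonian if $(JH)^T = JH$ and $W$ is skew-Hamiltonian if $(JW)^T = -JW$. The square-root question is natural because the square of a Hamiltonian matrix is always skew-Hamiltonian:
$$(JH^2)^T = (H^2)^T J^T = -H^T (H^T J) = -H^T(-JH) = (H^T J)\,H = -JH^2 .$$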
On Sliding Window Schemes
for Discrete Least-Squares Approximation by Trigonometric Polynomials
Abstract Fast, efficient, and reliable algorithms
for up- and downdating discrete least-squares approximations of a real-valued
function given at arbitrary distinct nodes in $[0,2\pi)$ by trigonometric
polynomials are presented. A combination of the up- and downdating algorithms
yields a sliding window scheme. The algorithms are based on schemes for the
solution of (inverse) unitary eigenproblems and require only ${\cal O}(mn)$
arithmetic operations as compared to ${\cal O}(mn^2)$ operations needed for
algorithms that ignore the structure of the problem. Numerical examples are
presented that show that the proposed algorithms produce consistently accurate
results that are often better than those obtained by general QR decomposition
methods for the least-squares problem.
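For comparison, the generic dense least-squares approach (against which the fast ${\cal O}(mn)$ algorithms are measured) can be written in a few lines; this sketch is independent of the paper's up-/downdating machinery, and the function name is chosen here for illustration.

```python
import numpy as np

def trig_lsq_fit(theta, f, n):
    """Fit f at nodes theta in [0, 2*pi) by a trigonometric polynomial of degree n
    via a dense least-squares solve (the generic O(m n^2) approach)."""
    cols = [np.ones_like(theta)]
    for j in range(1, n + 1):
        cols += [np.cos(j * theta), np.sin(j * theta)]
    V = np.column_stack(cols)                       # m x (2n+1) design matrix
    coef, *_ = np.linalg.lstsq(V, f, rcond=None)
    return coef, V @ coef                           # coefficients and fitted values

rng = np.random.default_rng(0)
theta = np.sort(rng.uniform(0, 2 * np.pi, 200))
f = np.sign(np.sin(theta)) + 0.05 * rng.standard_normal(theta.size)
coef, fit = trig_lsq_fit(theta, f, n=8)
print(np.linalg.norm(f - fit))
```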
The Symplectic Eigenvalue
Problem, the Butterfly Form, the SR Algorithm, and the Lanczos Method Abstract We discuss some aspects of the recently
proposed symplectic butterfly form which is a condensed form for symplectic
matrices. Any $2n \times 2n$ symplectic matrix can be reduced to this condensed
form which contains $8n-4$ nonzero entries and is determined by $4n-1$ parameters.
The $SR$ algorithm preserves this form and can be modified to work only with
the $4n-1$ parameters instead of the $4n^2$ matrix elements. The reduction
of symplectic matrices to symplectic butterfly form has a close analogy
to the reduction of arbitrary matrices to Hessenberg form. A Lanczos-like
algorithm for reducing a symplectic matrix to butterfly form is also presented.
Error bounds in the isometric
Arnoldi process Abstract Error bounds for the eigenvalues computed
in the isometric Arnoldi method are derived. The Arnoldi method applied to
a unitary matrix $U$ successively computes a sequence of unitary upper Hessenberg
matrices $H_k, k = 1, 2, \ldots$. The eigenvalues of the $H_k$'s are increasingly
better approximations to eigenvalues of $U$. An upper bound for the distance
of the spectrum of $H_k$ from the spectrum of $U$, and an upper bound for
the distance between each individual eigenvalue of $H_k$ and one of $U$
are given. Between two eigenvalues of $H_k$ on the unit circle, there is
guaranteed to lie an eigenvalue of $U$. The results are applied to a problem
in signal processing.
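The sketch below runs a plain (non-isometric) Arnoldi iteration on a unitary matrix, only to illustrate the setting: the Ritz values, i.e., the eigenvalues of the leading $k \times k$ Hessenberg matrices, lie in the closed unit disk and approach eigenvalues of $U$ as $k$ grows. The isometric variant analyzed in the paper additionally keeps the Hessenberg matrices unitary, which is what the error bounds exploit.

```python
import numpy as np

def arnoldi(A, b, k):
    """Plain Arnoldi with modified Gram-Schmidt: A @ V[:, :k] = V @ H."""
    n = A.shape[0]
    V = np.zeros((n, k + 1), dtype=complex)
    H = np.zeros((k + 1, k), dtype=complex)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = np.vdot(V[:, i], w)
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(0)
n, k = 60, 20
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
V, H = arnoldi(U, b, k)
ritz = np.linalg.eigvals(H[:k, :k])
print(np.abs(ritz))   # moduli at most 1; they approach 1 as Ritz values converge to eigenvalues of U
```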
Two Connections between
the SR and HR Eigenvalue Algorithms Abstract The SR and HR algorithms are members of
the family of GR algorithms for calculating eigenvalues and invariant subspaces
of matrices. This paper makes two connections between the SR and HR algorithms:
Inverse unitary eigenproblems and related orthogonal functions, Heike Faßbender Numerische Mathematik, Vol. 77, pp. 323 - 345, 1997 Abstract This paper explores the relationship between
certain inverse unitary eigenvalue problems and orthogonal functions. In
particular, the inverse eigenvalue problems for unitary Hessenberg matrices
and for Schur parameter pencils are considered. The Szegö recursion
is known to be identical to the Arnoldi process and can be seen as an algorithm
for solving an inverse unitary Hessenberg eigenvalue problem. Reformulation
of this inverse unitary Hessenberg eigenvalue problem yields an inverse eigenvalue
problem for Schur parameter pencils. It is shown that solving this inverse
eigenvalue problem is equivalent to computing Laurent polynomials orthogonal
on the unit circle. Efficient and reliable algorithms for solving the inverse
unitary eigenvalue problems are given which require only ${\cal O}(mn)$ arithmetic operations as compared with ${\cal O}(mn^2)$ operations needed for algorithms
that ignore the structure of the problem.
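For context, the Szegö recursion referred to above builds the polynomials $\{\phi_k\}$ orthogonal on the unit circle from the Schur parameters $\gamma_k$; in the monic normalization (indexing and sign conventions vary in the literature) it reads
$$\phi_{k+1}(z) = z\,\phi_k(z) - \overline{\gamma}_{k+1}\,\phi_k^*(z), \qquad \phi_k^*(z) = z^k\,\overline{\phi_k(1/\overline{z})}, \qquad \phi_0 \equiv 1,$$
with $|\gamma_k| < 1$; these $\gamma_k$ are the same parameters that appear in the Schur parameter (unitary Hessenberg) representation.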
An Implicitly Restarted
Symplectic Lanczos Method for the Hamiltonian Eigenvalue Problem Abstract An implicitly restarted symplectic Lanczos
method for the Hamiltonian eigenvalue problem is presented. The Lanczos vectors
are constructed to form a symplectic basis. The inherent numerical difficulties
of the symplectic Lanczos method are addressed by inexpensive implicit restarts.
The method is used to compute eigenvalues, eigenvectors, and invariant subspaces
of large and sparse Hamiltonian matrices and low-rank approximations to the solution of continuous-time algebraic Riccati equations with large and
sparse coefficient matrices.
A Jacobi-like method for solving algebraic Riccati equations on parallel computers Abstract An algorithm to solve continuous-time algebraic
Riccati equations through the Hamiltonian Schur form is developed. It is
an adaption for Hamiltonian matrices of an unsymmetric Jacobi method of Eberlein
\cite{Ebe87}. It uses unitary symplectic similarity transformations and preserves
the Hamiltonian structure of the matrix. Each iteration step needs only
local information about the current matrix, thus admitting efficient parallel
implementations on certain parallel architectures. Convergence performance
of the algorithm is compared with the Hamiltonian-Jacobi algorithm of Byers
\cite{Bye90}. The numerical experiments suggest that the method presented
here converges considerably faster for non-Hermitian Hamiltonian matrices than Byers' Hamiltonian-Jacobi algorithm. In addition, the numerical experiments suggest that, for the method presented here, the number of iterations needed for convergence can be predicted by a simple function of the matrix size.
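For reference, the continuous-time algebraic Riccati equation and the Hamiltonian matrix whose Schur form is computed here are, in one common sign convention,
$$A^T X + X A - X B R^{-1} B^T X + Q = 0, \qquad
H = \begin{bmatrix} A & -BR^{-1}B^T \\ -Q & -A^T \end{bmatrix};$$
the stabilizing solution is obtained from a basis $\begin{bmatrix} Y \\ Z \end{bmatrix}$ of the stable invariant subspace of $H$ as $X = Z Y^{-1}$, which is what a Hamiltonian Schur form makes directly accessible.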
On Numerical Methods For
Discrete Least-Squares Approximation By Trigonometric Polynomials
Abstract Fast, efficient and reliable algorithms
for discrete least-squares approximation of a real-valued function given at
arbitrary distinct nodes in $[0, 2\pi)$ by trigonometric polynomials are presented. The algorithms are based on schemes for the solution of inverse unitary eigenproblems and require only ${\cal O}(mn)$ arithmetic operations as compared to ${\cal O}(mn^2)$ operations needed for algorithms that ignore
the structure of the problem. An algorithm which solves this problem with
real-valued data and real-valued solution using only real arithmetic is given.
Numerical examples are presented that show that the proposed algorithms produce
consistently accurate results that are often better than those obtained by
general QR decomposition methods for the least-squares problem.
A Note
on Newbery's Algorithm for Discrete Least-Squares Approximation By Trigonometric
Polynomials Abstract Recently, fast, efficient, and reliable algorithms for discrete least-squares approximation of a real-valued function given at arbitrary distinct nodes in $[0, 2\pi)$ by trigonometric polynomials were presented in different papers. These algorithms are based on schemes for the solution of inverse unitary eigenproblems and require only ${\cal O}(mn)$ arithmetic operations as compared to ${\cal O}(mn^2)$ operations needed for algorithms that ignore the structure of the problem. In 1970, Newbery had already presented an ${\cal O}(mn)$ algorithm for solving the discrete least-squares approximation
by trigonometric polynomials. In this paper the connection between the different
algorithms is illustrated.
On the generalized Schur decomposition of a matrix
pencil for parallel computation, Abstract An algorithm to solve the generalized eigenvalue
problem $Ax=\lambda B x$ for arbitrary complex $n\times n$ matrices $A,B$
is developed. It is a generalization of an unsymmetric Jacobi method of Eberlein.
Each iteration step needs only local information about the current matrix,
thus admitting efficient parallel implementations on certain parallel architectures.
We discuss versions for an $\frac{n}{2} \times \frac{n}{2}$ square array of mesh-connected processors and a ring of $\frac{n}{2}$ processors. Convergence performance of the algorithm is compared with a recently proposed Jacobi-like method for matrix pencils based on the unsymmetric Jacobi method of Stewart. The numerical experiments suggest that the method presented here is considerably faster for matrix pencils that are far from normal.
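For comparison, the generalized Schur decomposition that the parallel method targets is what the standard (sequential) QZ algorithm computes; a minimal SciPy sketch:

```python
import numpy as np
from scipy.linalg import qz

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Generalized Schur form: A = Q S Z^H, B = Q T Z^H with S, T upper triangular
S, T, Q, Z = qz(A, B, output='complex')
eigs = np.diag(S) / np.diag(T)       # generalized eigenvalues of A x = lambda B x
print(np.allclose(Q @ S @ Z.conj().T, A), np.allclose(Q @ T @ Z.conj().T, B))
```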