Similar papers 5
July 6, 2016
The computation of the matrix exponential is a ubiquitous operation in numerical mathematics, and for a general, unstructured $n\times n$ matrix it can be computed in $\mathcal{O}(n^3)$ operations. An interesting problem arises if the input matrix is a Toeplitz matrix, for example as the result of discretizing integral equations with a time-invariant kernel. In this case it is not obvious how to take advantage of the Toeplitz structure, as the exponential of a Toeplitz matrix...
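As a minimal illustration of why exploiting the structure is nontrivial (this is only a dense SciPy baseline, not the paper's algorithm): the exponential of a Toeplitz matrix is in general not Toeplitz.

```python
# Dense baseline (not the paper's algorithm): build a small Toeplitz matrix,
# exponentiate it with a general O(n^3) method, and observe that the result
# is no longer Toeplitz.
import numpy as np
from scipy.linalg import toeplitz, expm

c = np.array([2.0, -1.0, 0.3, 0.0])   # first column of T
r = np.array([2.0, -0.5, 0.1, 0.0])   # first row of T
T = toeplitz(c, r)

E = expm(T)
# If E were Toeplitz, entries on the same diagonal would coincide:
print(E[0, 1], E[1, 2], E[2, 3])      # in general, these differ
```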
December 28, 2023
The Paterson--Stockmeyer method is an evaluation scheme for matrix polynomials with scalar coefficients that arise in many state-of-the-art algorithms based on polynomial or rational approximation, for example, those for computing transcendental matrix functions. We derive a mixed-precision version of the Paterson--Stockmeyer method that is particularly useful for evaluating matrix polynomials with scalar coefficients of decaying magnitude. The key idea is to perform computat...
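For reference, a minimal fixed-precision Paterson--Stockmeyer evaluation might look as follows; this is the standard scheme, not the paper's mixed-precision variant, and the function name and the truncated-Taylor check are mine.

```python
# Standard (fixed-precision) Paterson--Stockmeyer evaluation of
# p(A) = sum_k coeffs[k] * A^k using roughly 2*sqrt(d) matrix products.
import math
import numpy as np

def paterson_stockmeyer(coeffs, A):
    n = A.shape[0]
    d = len(coeffs) - 1
    s = max(1, math.isqrt(d) + 1)              # block size ~ sqrt(d)
    powers = [np.eye(n)]                       # A^0, A^1, ..., A^s
    for _ in range(s):
        powers.append(powers[-1] @ A)
    As, r = powers[s], d // s
    P = np.zeros((n, n))
    for j in range(r, -1, -1):                 # Horner recurrence in A^s over
        block = sum(coeffs[j * s + i] * powers[i]          # coefficient blocks
                    for i in range(min(s, d - j * s + 1)))
        P = P @ As + block
    return P

# Quick check against naive evaluation for a truncated Taylor series of exp:
A = np.random.default_rng(0).standard_normal((5, 5)) / 5
coeffs = [1.0 / math.factorial(k) for k in range(9)]
ref = sum(c * np.linalg.matrix_power(A, k) for k, c in enumerate(coeffs))
print(np.allclose(paterson_stockmeyer(coeffs, A), ref))    # expected: True
```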
August 27, 2009
The numerical computation of the exponential of a real matrix has been intensively studied. The main objective of a good numerical method is to control round-off errors and computational cost. The situation is more complicated when exponentiating interval matrices: indeed, the main difficulty is now the loss of dependency between the different occurrences of the variables under interval evaluation, which may lead to enclosures so wide that they are useless. In th...
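The dependency problem the abstract refers to is easy to reproduce with a toy interval subtraction (illustration only, unrelated to the paper's enclosure method):

```python
# Naive interval arithmetic treats each occurrence of a variable independently,
# so x - x is evaluated as [a, b] - [a, b] = [a - b, b - a] instead of [0, 0].
def interval_sub(x, y):
    return (x[0] - y[1], x[1] - y[0])

x = (-1.0, 1.0)
print(interval_sub(x, x))   # (-2.0, 2.0): far wider than the true value 0
```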
June 16, 2021
A matrix is called a P-matrix if all its principal minors are positive. P-matrices have found important applications in functional analysis, mathematical programming, and dynamical systems theory. We introduce a new class of real matrices denoted~$\EP$. A matrix is in~$\EP$ if and only if its matrix exponential is a P-matrix for all positive times. In other words, $A\in \EP$ if and only if the transition matrix of the linear system~$\dot x=Ax$ is a P-matrix for any positive t...
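A brute-force spot check of the defining property can be written directly from the definition (cost is exponential in the dimension; the names and sample times below are mine, and passing a few sampled times is of course not a proof of membership in the class):

```python
# Check all principal minors of exp(t*A) at a few sample times t > 0.
import itertools
import numpy as np
from scipy.linalg import expm

def is_P_matrix(M):
    n = M.shape[0]
    return all(np.linalg.det(M[np.ix_(idx, idx)]) > 0
               for k in range(1, n + 1)
               for idx in itertools.combinations(range(n), k))

A = np.array([[-1.0, 0.2],
              [0.3, -2.0]])
print([is_P_matrix(expm(t * A)) for t in (0.1, 1.0, 5.0)])
```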
December 17, 2007
We derive explicit formulas for calculating $e^A$, $\cosh A$, $\sinh A$, $\cos A$ and $\sin A$ for a given $2\times2$ matrix $A$. We also derive explicit formulas for $e^A$ for a given $3\times3$ matrix $A$. These formulas are expressed exclusively in terms of the characteristic roots of $A$ and involve neither the eigenvectors of $A$, nor the transition matrix associated with a particular canonical basis. We believe that our method has advantages (especially if applied by ...
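For the $2\times2$ case, a formula of this kind is the standard Lagrange--Sylvester interpolation expression in the characteristic roots $\lambda_1,\lambda_2$ (the paper's own formulas may be stated in a different form):
\[
e^{A}=\frac{e^{\lambda_1}\,(A-\lambda_2 I)-e^{\lambda_2}\,(A-\lambda_1 I)}{\lambda_1-\lambda_2}\quad\text{if }\lambda_1\neq\lambda_2,
\qquad
e^{A}=e^{\lambda}\bigl(I+(A-\lambda I)\bigr)\quad\text{if }\lambda_1=\lambda_2=\lambda .
\]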
April 3, 2018
We propose a new second-order method for geodesically convex optimization on the natural hyperbolic metric over positive definite matrices. We apply it to solve the operator scaling problem in time polynomial in the input size and logarithmic in the error. This is an exponential improvement over previous algorithms which were analyzed in the usual Euclidean, "commutative" metric (for which the above problem is not convex). Our method is general and applicable to other setting...
January 19, 2016
We consider a continuous analogue of Babai et al.'s and Cai et al.'s problem of solving multiplicative matrix equations. Given $k+1$ square matrices $A_{1}, \ldots, A_{k}, C$, all of the same dimension, whose entries are real algebraic, we examine the problem of deciding whether there exist non-negative reals $t_{1}, \ldots, t_{k}$ such that \begin{align*} \prod \limits_{i=1}^{k} \exp(A_{i} t_{i}) = C . \end{align*} We show that this problem is undecidable in general, but dec...
September 14, 2017
The Cayley-Hamilton problem of expressing functions of matrices in terms of only their eigenvalues is well-known to simplify to finding the inverse of the confluent Vandermonde matrix. Here, we give a highly compact formula for the inverse of any matrix, and apply it to the confluent Vandermonde matrix, achieving in a single equation what has only been achieved by long iterative algorithms until now. As a prime application, we use this result to get a simple formula for expli...
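In the non-confluent (distinct-eigenvalue) case, the reduction to a Vandermonde system can be sketched in a few lines; the names below are mine, and this naive solve does not handle the confluent case that the paper's compact inverse addresses.

```python
# e^A = p(A) for the degree-(n-1) polynomial p interpolating exp at the
# eigenvalues (Cayley--Hamilton); its coefficients solve a Vandermonde system.
# Requires distinct eigenvalues so that the Vandermonde matrix is invertible.
import numpy as np

def expm_via_eigenvalues(A):
    lam = np.linalg.eigvals(A)
    V = np.vander(lam, increasing=True)      # rows: [1, lam_i, lam_i^2, ...]
    c = np.linalg.solve(V, np.exp(lam))      # coefficients with p(lam_i) = exp(lam_i)
    E = np.zeros_like(A, dtype=complex)
    Ak = np.eye(A.shape[0], dtype=complex)
    for ck in c:                             # evaluate p(A) = sum_k c_k A^k
        E = E + ck * Ak
        Ak = Ak @ A
    return E.real if np.allclose(E.imag, 0) else E

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])                 # eigenvalues -1 and -2
print(expm_via_eigenvalues(A))
```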
April 18, 2016
We derive a numerical algorithm for evaluating the Riemannian logarithm on the Stiefel manifold with respect to the canonical metric. In contrast to the existing optimization-based approach, we work from a purely matrix-algebraic perspective. Moreover, we prove that the algorithm converges locally and exhibits a linear rate of convergence.
December 28, 2021
We extend the matrix-resolvent method for computing logarithmic derivatives of tau-functions to the Ablowitz--Ladik hierarchy. In particular, we derive a formula for the generating series of the logarithmic derivatives of an arbitrary tau-function in terms of matrix resolvents. As an application, we provide a way of computing certain integrals over the unitary group.