Public Documentation

Documentation for ExponentialAction.jl's public interface.

See the Internals section for documentation of internal functions.

Index

Public Interface

ExponentialAction.expv — Method
expv(t, A, B; shift=true, tol)

Compute $\exp(tA)B$ without computing $tA$ or the matrix exponential $\exp(tA)$.

Computing the action of the matrix exponential is significantly faster than computing $\exp(tA)$ and then multiplying it by $B$ when the second dimension of $B$ is much smaller than the first. The "time" $t$ may be real or complex.

In short, the approach computes

\[F = T_m(tA / s)^s B,\]

where $T_m(X)$ is the Taylor series of $\exp(X)$ truncated to degree $m = m^*$. The scaling parameter $s$ determines how many times the truncated series is applied to $B$. The values $m^*$ and $s$ are chosen to minimize the number of matrix products needed while achieving the requested tolerance tol.

The approach is described in detail as Algorithm 3.2 of [AlMohyHigham2011].
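As a quick sanity check, the action computed by expv can be compared against the dense reference $\exp(tA)B$. The matrices below are arbitrary random examples, and the snippet assumes ExponentialAction is installed:

```julia
using ExponentialAction  # provides expv
using LinearAlgebra      # provides the dense matrix exponential exp

# A moderately sized operator and a tall, skinny B: the regime where
# computing only the action of exp(t*A) pays off.
A = randn(100, 100)
B = randn(100, 2)
t = 0.5

F = expv(t, A, B)       # action of the matrix exponential on B
F_ref = exp(t * A) * B  # dense reference: forms exp(tA) explicitly

@assert F ≈ F_ref
```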

Keywords

  • shift=true: Expand the Taylor series about the shifted matrix $A - μI$ instead of $A$, where $μ = \operatorname{tr}(A) / n$ and $A$ is $n \times n$, to speed up convergence. See §3.1 of [AlMohyHigham2011].
  • tol: The relative tolerance at which to compute the result. Defaults to the tolerance of the eltype of the result.
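A hypothetical example of the keywords, using arbitrary random inputs and assuming ExponentialAction is installed (complex $t$ is also exercised here, since it is explicitly supported):

```julia
using ExponentialAction
using LinearAlgebra

A = randn(50, 50)
B = randn(50)
t = 0.5 + 0.2im  # the "time" may be complex

# Defaults: shift=true, tol taken from the eltype of the result.
F = expv(t, A, B)

# Disable the trace shift and request an explicit relative tolerance.
F_loose = expv(t, A, B; shift=false, tol=1e-8)

@assert F ≈ exp(t * A) * B
@assert isapprox(F_loose, F; rtol=1e-6)
```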
ExponentialAction.expv_sequence — Method
expv_sequence(t::AbstractVector, A, B; kwargs...)

Compute $\exp(t_i A)B$ for the (sorted) sequence of (real) time points $t=(t_1, t_2, \ldots)$.

At each time point, the result $F_i$ is computed as

\[F_i = \exp\left((t_i - t_{i-1}) A\right) F_{i - 1}\]

using expv, where $t_0 = 0$ and $F_0 = B$. For details, see Equation 5.2 of [AlMohyHigham2011].

Because the cost of computing expv grows with the operator 1-norm of $t_i A$, this incremental computation, which only ever applies expv over the smaller increments $t_i - t_{i-1}$, is more efficient than calling expv separately for each time point.

See expv for a description of acceptable kwargs.
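A sketch of the recurrence above, using arbitrary random inputs and assuming ExponentialAction is installed; each element of the returned sequence should match an independent expv call at the corresponding time point:

```julia
using ExponentialAction

A = randn(40, 40) / 10
B = randn(40, 3)
t = [0.1, 0.4, 1.0]  # sorted, real time points

Fs = expv_sequence(t, A, B)

# Although computed incrementally via F_i = expv(t_i - t_{i-1}, A, F_{i-1}),
# each result agrees with a direct evaluation at that time point.
for (ti, Fi) in zip(t, Fs)
    @assert Fi ≈ expv(ti, A, B)
end
```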

expv_sequence(t::AbstractRange, A, B; kwargs...)

Compute expv over the uniformly spaced sequence of time points $t$.

This method takes special care to avoid overscaling and to save and reuse matrix products; it is described as Algorithm 5.2 of [AlMohyHigham2011].
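A minimal usage sketch for the range method, with arbitrary random inputs and assuming ExponentialAction is installed. Passing an AbstractRange (rather than a plain vector) dispatches to this uniformly spaced variant:

```julia
using ExponentialAction

A = randn(30, 30) / 5
B = randn(30)
t = range(0, 1; length=11)  # uniform spacing, step 0.1

Fs = expv_sequence(t, A, B)

@assert length(Fs) == length(t)
@assert first(Fs) ≈ B  # at t = 0 the action of exp(0 * A) is the identity
```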
