Adaptive Low-Rank Methods for High-Dimensional Second-Order Elliptic Problems
We consider the application of subspace-based tensor formats to high-dimensional operator equations on Hilbert spaces, and combine such tensor representations with adaptive basis expansions of the arising lower-dimensional components. This leads to a highly nonlinear type of approximation.
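The simplest instance of such a subspace-based format is the matrix case, where the best rank-r approximation is given by the truncated SVD. The following minimal NumPy sketch (illustrative only, not from the talk; all names and parameters are our own) shows how rapidly decaying singular values make low-rank compression effective:

```python
import numpy as np

def truncated_svd(A, r):
    """Best rank-r approximation of A in the Frobenius norm (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :r] * s[:r] @ Vt[:r, :]

rng = np.random.default_rng(0)
# A matrix built from separable terms with geometrically decaying weights,
# so its singular values decay rapidly and it is compressible.
A = sum(2.0**-k * np.outer(rng.standard_normal(50), rng.standard_normal(50))
        for k in range(20))
A5 = truncated_svd(A, 5)
err = np.linalg.norm(A - A5) / np.linalg.norm(A)
print(err)  # small relative error already at rank 5
```

Higher-order formats (Tucker, tensor train, hierarchical Tucker) generalize this principle to tensors via matricizations, which is what makes adaptive expansions of the lower-dimensional components natural.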
In this talk, we focus on problems posed on function spaces whose inner products do not induce a cross norm, e.g., problems on Sobolev spaces such as second-order elliptic PDEs on product domains. We discuss the particular issues, related to general spectral properties of such elliptic operators, that arise in treating these problems by low-rank tensor expansions. For the wavelet representations considered here, preconditioning reduces to diagonal scaling, which, however, still turns out to be problematic for low-rank representations.
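The obstruction can be seen concretely: the inverse scaling is not a separable function of the coordinate directions, so applying it exactly is incompatible with a fixed-rank format. A classical device in this setting is approximation by sums of exponentials, which are separable term by term. A hedged NumPy sketch with illustrative ranges and quadrature parameters (the scaling 1/(s+t) stands in for the actual wavelet scaling sequence):

```python
import numpy as np

# Scaling values s_i + t_j stand in for sums of squared frequencies across
# two coordinate directions; all ranges below are illustrative.
s = np.linspace(1.0, 50.0, 60)
t = np.linspace(1.0, 50.0, 60)
M = 1.0 / (s[:, None] + t[None, :])   # exact (non-separable) inverse scaling

# Trapezoidal (sinc) quadrature for 1/x = int_0^inf exp(-x u) du, u = exp(tau):
# 1/x  ~  sum_k w_k exp(-a_k x)  with  a_k = exp(tau_k),  w_k = h * exp(tau_k).
h = 0.5
tau = h * np.arange(-30, 31)
a = np.exp(tau)
w = h * a
# Each term w_k exp(-a_k s) exp(-a_k t) is separable, i.e., rank one.
Approx = (np.exp(-np.outer(s, a)) * w) @ np.exp(-np.outer(a, t))
rel_err = np.max(np.abs(Approx - M) / M)
print(rel_err)  # small uniform relative error with 61 separable terms
```

The number of terms grows only logarithmically with the required accuracy and the range of scaling values, which is what makes such separable surrogates viable inside tensor formats.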
We present an approximate diagonal scaling operation suitable for tensor expansions, together with an iterative method, not tied to a fixed background discretization, that under standard assumptions is guaranteed to converge to the solution of the continuous problem. Furthermore, under additional sparsity assumptions on the low-rank representations, the scheme constructs an approximate solution using a number of arithmetic operations that is optimal up to logarithmic factors. The major difficulty here lies in obtaining meaningful bounds on the tensor ranks of the iterates. The practical efficiency of the method is demonstrated in numerical experiments.
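The flavor of such a rank-controlled iteration can be sketched in the simplest two-dimensional setting, where the elliptic operator acts in Sylvester form and the iterates are re-truncated to a fixed rank after every Richardson step. This is a generic illustration under our own assumptions (discretized 1D Laplacian, illustrative step size and rank cap), not the method of the talk:

```python
import numpy as np

def truncate(X, r):
    """Retain the best rank-r approximation of X (truncated SVD)."""
    U, sv, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] * sv[:r] @ Vt[:r, :]

n = 10
# 1D Dirichlet Laplacian stencil; the 2D operator acts as L(X) = A X + X A^T.
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L = lambda X: A @ X + X @ A.T
B = np.outer(np.ones(n), np.ones(n))   # rank-one right-hand side

omega, r = 0.25, 8                     # illustrative step size and rank cap
X = np.zeros((n, n))
for _ in range(500):
    # Richardson step followed by rank truncation of the iterate.
    X = truncate(X + omega * (B - L(X)), r)

res = np.linalg.norm(B - L(X)) / np.linalg.norm(B)
print(res)  # small relative residual despite the rank cap
```

In the actual adaptive setting, the interplay between truncation tolerances and convergence is precisely where the rank bounds for the iterates become delicate.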
The presented results are joint work with Wolfgang Dahmen.