title:
Convergence of gradient algorithms in hierarchical tensor formats
name:
Schneider
first name:
Reinhold
location/conference:
SPP-JT13
PRESENTATION-link:
http://www.dfg-spp1324.de/nuhagtools/event_NEW/dateien/SPP-JT13/talks/Schneider_JT13.pdf
abstract:
In tensor product approximation, the hierarchical Tucker tensor format (Hackbusch) and tensor trains (TT) (Tyrtyshnikov) have been introduced recently, offering stable and robust approximation at low-order cost. The ranks required for an approximation up to a given error depend on bilinear approximation rates and the corresponding trace-class norms.
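
For orientation, here is a minimal sketch of the TT representation and its cost; the notation is illustrative and not taken from the talk itself:

% A tensor u of order d in TT format (illustrative notation):
% each entry is a product of matrices, with boundary ranks r_0 = r_d = 1.
u(i_1, \dots, i_d) = G_1(i_1)\, G_2(i_2) \cdots G_d(i_d),
\qquad G_k(i_k) \in \mathbb{R}^{r_{k-1} \times r_k}.
% Storage is O(d n r^2), with n = max_k n_k and r = max_k r_k,
% i.e. linear in the order d: the "low-order cost" mentioned above.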

For numerical computations, we cast the computation of an approximate ground-state solution into an optimization problem constrained by the restriction to tensors of prescribed ranks r. For approximation by elements from this highly nonlinear manifold, we apply a nonlinear Galerkin framework; the extension to the dynamical problem corresponds to the Dirac-Frenkel variational principle, obtained by describing the differential-geometric structure of these novel tensor formats. We define several variants of gradient schemes and consider their convergence, which can be shown with the help of the Łojasiewicz inequality.
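
As a hedged illustration of one such scheme (the projection and retraction notation below is an assumption for exposition, not necessarily the variant treated in the talk), a Riemannian gradient iteration on the fixed-rank manifold can be written as:

% Minimize a functional J over the manifold of tensors of fixed ranks r:
\min_{u \in \mathcal{M}_r} J(u), \qquad
\mathcal{M}_r = \{ u : \operatorname{rank}(u) = r \}.
% One gradient step: project the gradient onto the tangent space
% T_{u_n} M_r, then retract back onto the manifold:
u_{n+1} = R_{u_n}\!\bigl( -\alpha_n \, P_{T_{u_n}\mathcal{M}_r} \nabla J(u_n) \bigr).
% Lojasiewicz inequality near a critical point u^*, with some theta in (0, 1/2]:
|J(u) - J(u^*)|^{1-\theta} \le C \, \| P_{T_u \mathcal{M}_r} \nabla J(u) \|,
% which is the standard tool for concluding convergence of the whole
% sequence of iterates, rather than only of subsequences.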