In tensor product approximation, the Hierarchical Tucker tensor format (Hackbusch) and Tensor Trains (TT) (Tyrtyshnikov) have been introduced recently, offering stable and robust approximation at low computational cost. The ranks required for an approximation up to a given error depend on bilinear approximation rates and corresponding trace-class norms.
For numerical computations, we cast the computation of an approximate ground state solution as an optimization problem constrained to tensors of prescribed ranks r. For approximation by elements of this highly nonlinear manifold, we apply a nonlinear Galerkin framework; its extension to the dynamical problem corresponds to the Dirac-Frenkel variational principle, obtained by describing the differential-geometric structure of the novel tensor formats. We define several variants of gradient schemes and analyze their convergence, which can be established with the help of the Łojasiewicz inequality.
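To give a concrete flavor of such gradient schemes, the following is a minimal illustrative sketch, not the paper's actual algorithm: a projected gradient iteration on the manifold of rank-r matrices (the simplest instance of a fixed-rank tensor format), minimizing a Rayleigh quotient to approximate a ground state, with truncated SVD playing the role of the retraction back onto the manifold. All function names, the choice of objective, and the step-size parameters here are hypothetical.

```python
import numpy as np

def truncate_svd(X, r):
    # Retraction onto the manifold of rank-r matrices via truncated SVD
    # (illustrative stand-in for rank truncation in HT/TT formats).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r]

def projected_gradient_ground_state(H, shape, r, steps=3000, lr=0.02, seed=0):
    """Minimize the Rayleigh quotient <x, H x> over unit vectors x whose
    reshape to `shape` has rank at most r (a toy fixed-rank constraint)."""
    rng = np.random.default_rng(seed)
    X = truncate_svd(rng.standard_normal(shape), r)
    for _ in range(steps):
        x = X.ravel()
        x /= np.linalg.norm(x)            # stay on the unit sphere
        Hx = H @ x
        rho = x @ Hx                      # current Rayleigh quotient (energy)
        grad = 2.0 * (Hx - rho * x)       # gradient of <x,Hx> on the sphere
        # Euclidean step followed by rank truncation (projected gradient).
        X = truncate_svd((x - lr * grad).reshape(shape), r)
    x = X.ravel()
    x /= np.linalg.norm(x)
    return x @ (H @ x), X
```

For a small symmetric test operator, e.g. `H = np.diag(np.arange(1.0, 17.0))` with vectors reshaped to a 4x4 matrix of rank at most 2, the iteration converges to the smallest eigenvalue, since the exact ground state is already of low rank in this toy setting.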