Abstracts of the Second Phase

Applicants: Belomestny (Duisburg), Bender (Saarbrücken)
Research staff: Dickmann (Duisburg), Schweizer (Saarbrücken)
The theory of optimal stopping is concerned with the problem of choosing a time to take a particular action, in order to maximize an expected reward or minimize an expected cost. Reflected backward stochastic differential equations can be considered as generalizations of optimal stopping problems in which the reward functional may also depend on the solution. Such problems arise in many areas of statistics, economics, and mathematical finance (e.g. the pricing of American options). Primal and dual approaches have been developed in the literature which give rise to Monte Carlo algorithms for high-dimensional stopping problems. Typically, these algorithms lead to problems of functional convex optimization, where the original objective functionals are estimated by Monte Carlo. Despite the convexity, the performance of these optimization algorithms deteriorates sharply as the dimension of the underlying state space increases, unless there exists a good low-dimensional approximation of the optimal value function. The aim of this project is to develop several novel approaches based on the penalization of the corresponding empirical objective functionals which are able either to recover the most important components of the state space or to identify a sparse representation of the value function in a given class of functions.
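
A minimal sketch of the kind of regression-based Monte Carlo step alluded to above: a Longstaff-Schwartz-type lower bound for a Bermudan put in which the empirical least-squares problem for the continuation value is l1-penalized (lasso), so that only few basis functions survive. Model, basis and penalty weight are illustrative assumptions, not the project's actual algorithm.

    # Regression Monte Carlo for a Bermudan put with an l1-penalized (lasso)
    # least-squares step; a toy sketch, not the penalization scheme of the project.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    S0, K, r, sigma, T, steps, paths = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 20000
    dt = T / steps

    # geometric Brownian motion paths (illustrative model)
    Z = rng.standard_normal((paths, steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1))
    S = np.hstack([np.full((paths, 1), S0), S])

    payoff = lambda s: np.maximum(K - s, 0.0)
    cash = payoff(S[:, -1])                          # value if held to maturity

    for t in range(steps - 1, 0, -1):
        cash *= np.exp(-r * dt)                      # discount continuation values to time t
        itm = payoff(S[:, t]) > 0                    # regress on in-the-money paths only
        if itm.any():
            X = np.vander(S[itm, t] / K, N=6)        # polynomial basis (illustrative)
            cont = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(X, cash[itm]).predict(X)
            ex = payoff(S[itm, t]) > cont            # exercise where payoff beats continuation
            idx = np.flatnonzero(itm)[ex]
            cash[idx] = payoff(S[idx, t])
    print(f"lower-biased price estimate: {np.exp(-r * dt) * cash.mean():.3f}")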

Applicants: Dahmen (Aachen), Kutyniok (Berlin), Schwab (Zürich)
Research staff: Huang (Aachen), Lim (Berlin)
Many important multivariate problem classes are governed by anisotropic features such as singularities concentrated on lower dimensional embedded manifolds. Examples are digital images as well as solutions of hyperbolic conservation laws or, more generally, of transport dominated equations. Over the past few years substantial progress has been made in efficiently encoding signals with anisotropic features based on new directional representation systems like shearlets. Although these developments for explicitly given signals are far from complete, their understanding is far more mature than analogous considerations for implicitly given objects like solutions to operator equations of the above type. The central objective of this project is therefore the development and understanding of new discretization concepts that are able to economically and reliably capture anisotropic phenomena in the solutions of such equations. A central issue is finding suitable variational formulations that, on the one hand, support adaptive solution concepts for transport operators and, on the other hand, are suitable for treating parameter-dependent and therefore also high-dimensional problems.

Applicant: Dahlke (Marburg)

Applicants: Dahlke (Marburg), Maaß (Bremen), Stevenson (Amsterdam)
Research staff: Friedrich (Marburg), Ressel (Bremen)
This project is the continuation of the DFG project 'Adaptive Wavelet Frame Methods for Operator Equations: Sparse Grids, Vector-Valued Spaces and Applications to Nonlinear Inverse Parabolic Problems'. The aim of this project is the development of optimally convergent adaptive wavelet schemes for complex systems. In particular, we are concerned with (nonlinear) elliptic and parabolic operator equations on nontrivial domains as well as with the related inverse parameter identification problems. For the forward problems, we use generalized tensor product approximation techniques that realize dimension-independent convergence rates. In the first period of SPP 1324, these tensor wavelet bases have already been provided and associated adaptive wavelet algorithms have been designed, implemented, and tested. The tests performed during the first period include the numerical solution of a prototypical inverse parabolic parameter identification problem. In addition, the theoretical prerequisites for applying sparsity-constrained Tikhonov regularization to such inverse problems have been proved. These investigations will be systematically continued in the second period. In particular, one central goal will be the generalization of such adaptive wavelet algorithms to nonlinear equations. Furthermore, we aim at extending the theoretical investigation of Tikhonov regularization schemes with sparsity constraints by incorporating positivity constraints. As a model problem we will study the parameter identification problem for a parabolic reaction-diffusion system which describes the gene concentrations in embryos at an early state of development (embryogenesis).

Applicants: Dahlke (Marburg), Ritter (Kaiserslautern), Schilling (Dresden)
Research staff: Döhring (Kaiserslautern), Kinzel (Marburg), Lindner (Dresden)
This project is the continuation of the DFG-Project ‘Adaptive Wavelet Methods for SPDEs’ within the SPP 1324. This research proposal is about the numerical treatment of complex stochastic dynamical systems which are described by stochastic partial differential equations of parabolic type on Lipschitz domains. The overall goal of this project is to establish fully adaptive numerical schemes based on wavelets. By using a variant of the Rothe method, we intend to employ a semi-implicit time discretization scheme involving a suitable step-size control. Then, in each time step, an elliptic subproblem has to be solved. To this end, suitable variants of recently developed adaptive wavelet/frame schemes can be used. To get a theoretical underpinning of our algorithms, our investigations must involve regularity estimates of the solutions in specific scales of Besov spaces. In the first period of the SPP 1324, we have introduced a new, wavelet-based noise model and the adaptive wavelet schemes for the elliptic subproblems have already been designed, analysed, implemented and tested. Moreover, spatial Besov regularity results for the solution of linear equations on Lipschitz domains have been established. In the second period of the SPP 1324, these investigations will be systematically continued. The focus will be on the development of efficient variants of the Rothe method and the generalization of the Besov regularity results to nonlinear equations and full space-time regularity. As in the first period, the overall scheme will be implemented, tested and integrated into the Marburg software library.
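
For orientation, a generic semi-implicit Rothe step for a parabolic SPDE du = [Au + f(u)] dt + B dW, with linear elliptic part A, nonlinearity f and noise operator B, might take the form (a sketch of the general idea, not the project's specific scheme)

    (I - \tau_n A)\, u_{n+1} \;=\; u_n + \tau_n f(u_n) + B\,\bigl(W(t_{n+1}) - W(t_n)\bigr),

where the step size \tau_n is chosen by the step-size control and each time step requires the solution of an elliptic problem for u_{n+1}; this is exactly where the adaptive wavelet/frame solvers enter.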

Applicants: Dereich (Münster), Müller-Gronbach (Passau), Neuenkirch (Mannheim), Ritter (Kaiserslautern)
Research staff: Yaroslavtseva (Passau)
Different views on quadrature problems for stochastic differential equations (SDEs) lead to rather different algorithmic approaches, e.g., PDE methods based on the Fokker-Planck equation, deterministic or randomized high-dimensional numerical integration for explicitly solvable SDEs, and Monte Carlo simulation of SDEs. In this project we employ the concept of approximation of probability distributions as the basis for quadrature of SDEs, both for constructing new deterministic and randomized algorithms and for establishing optimality results. Our research is devoted to

  • (I) Constructive Quantization of Systems of SDEs
  • (II) Quadrature of Discontinuous Functionals
  • (III) Multilevel Methods for Lévy-driven SDEs
  • (IV) Multilevel Quadrature on the Sequence Space and Malliavin Regularity
  • (V) A Numerical Toolbox in OpenCL/CUDA

Part (I) is concerned with the construction of discrete distributions to approximate a target distribution and its application to the quadrature problem. In (II), Malliavin techniques and numerical schemes of higher order are analyzed for the quadrature of discontinuous functionals. In (III), we develop simulation techniques for Lévy processes for the design of multilevel schemes, and an analysis for continuous and discontinuous functionals is conducted. Recent progress in high-dimensional integration motivates the study in (IV). In (V), we build parallel implementations of the algorithms from (II) and (III) that make full use of the massively parallel processing units hidden in today's graphics cards.
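
As background for the multilevel themes in (III) and (IV), a minimal multilevel Monte Carlo estimator for E[f(X_T)] based on coupled Euler-Maruyama discretizations of a scalar SDE might look as follows. Drift, diffusion, functional and sample sizes are illustrative assumptions; the project's schemes for Lévy-driven SDEs are considerably more involved.

    # Minimal multilevel Monte Carlo sketch for E[f(X_T)], dX = a(X) dt + b(X) dW,
    # using coupled coarse/fine Euler-Maruyama paths on each level.
    import numpy as np

    rng = np.random.default_rng(1)
    a = lambda x: -x                 # drift (illustrative)
    b = lambda x: 1.0 + 0.0 * x      # diffusion (illustrative)
    f = lambda x: x**2               # functional of interest
    T, X0, M = 1.0, 1.0, 2           # M = mesh refinement factor per level

    def level_difference(level, n_samples):
        """Samples of f(X_T^fine) - f(X_T^coarse); on level 0, just f(X_T) from one step."""
        nf, dtf = M**level, T / M**level
        Xf = np.full(n_samples, X0)
        Xc = np.full(n_samples, X0)
        dWc = np.zeros(n_samples)
        for k in range(nf):
            dW = np.sqrt(dtf) * rng.standard_normal(n_samples)
            Xf += a(Xf) * dtf + b(Xf) * dW
            dWc += dW
            if level > 0 and (k + 1) % M == 0:       # coarse path uses summed increments
                Xc += a(Xc) * (M * dtf) + b(Xc) * dWc
                dWc = np.zeros(n_samples)
        return f(Xf) - (f(Xc) if level > 0 else 0.0)

    samples_per_level = [10000, 4000, 1600, 640]     # illustrative, not optimized
    estimate = sum(level_difference(l, n).mean() for l, n in enumerate(samples_per_level))
    print(f"MLMC estimate of E[f(X_T)]: {estimate:.4f}")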

Applicants: Cliffe (Nottingham), Ernst (Freiberg), Starkloff (Zwickau)
Research staff: Mugler (Zwickau)
PDEs with random data constitute a primary technique of uncertainty quantification (UQ), in which variability and lack of information on the data of a differential equation are modeled by stochastic processes displaying significant correlations. Stochastic Galerkin (SG) methods are routinely used to approximate the solutions of such PDEs with random data, and much progress in their mathematical analysis and computational implementation has been made since the late 1990s. The second phase of this project intends to make a contribution in three aspects of SG methods: (1) A deeper analysis of the mathematical foundations underlying the representation of stochastic processes by polynomial chaos expansions and their generalizations, which underlie the SG method. After giving a rigorous generalization of the Cameron-Martin theorem in the first phase, the second phase of the project will focus on quantitative results on the approximation properties of generalized polynomial chaos expansions as well as compare these with wavelet-based expansions. (2) Establishing well-posed formulations of elliptic PDEs with random coefficients: When the random coefficient of a stationary diffusion equation is not uniformly bounded away from zero and infinity, weighted L2-spaces are necessary to obtain well-posed variational formulations. We will investigate these and their implications for computational techniques. (3) We will develop and implement a Bayesian inversion methodology for UQ in inverse problems based on recently developed mathematical frameworks for Bayesian inversion in PDE models, which use SG approximation as a basic step. We will apply this technique to two case studies of inverse problems in groundwater flow and geophysical exploration.
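
In the notation commonly used for stochastic Galerkin methods, a generalized polynomial chaos expansion of the random solution u reads

    u(x,\omega) \;=\; \sum_{\alpha} u_\alpha(x)\, \Psi_\alpha\bigl(\xi_1(\omega), \xi_2(\omega), \dots\bigr),

where the \Psi_\alpha are multivariate polynomials orthonormal with respect to the joint law of the basic random variables \xi_i and the deterministic coefficient functions u_\alpha are the stochastic Galerkin unknowns. Point (1) above concerns precisely when such expansions converge and how fast their truncations approximate u.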

Applicants: Friz (Berlin), Stannat (Darmstadt)
Research staff: Diehl (Berlin)
In 1998 T. Lyons (Oxford) suggested a new approach to the robust pathwise solution of stochastic differential equations which is nowadays known as rough path analysis. Based on this approach, a new class of numerical algorithms for the solution of stochastic differential equations has been developed. Recently, the rough path approach has also been successfully extended to stochastic partial differential equations. In stochastic filtering, the (unnormalized) conditional distribution of a Markovian signal observed with additive noise is called the optimal filter, and it can be described as the solution of a stochastic partial differential equation known as the Zakai equation. In the proposed project we want to apply rough path analysis to the robust pathwise solution of the Zakai equation in order to construct robust versions of the optimal filter. Subsequently, we want to apply known algorithms based on the rough path approach to the numerical approximation of these robust estimators and further investigate their properties.

Applicant: Garcke (Bonn)
Research staff: Klompmaker (Berlin)
This project considers discretization approaches for continuous state spaces arising in optimal control and reinforcement learning and aims to make significant progress for function approximation in more than 6 dimensions using an adaptive sparse grid approach. In both closely related application areas one uses dynamic programming methods to (numerically) estimate the utility of taking actions in states of the domain; this gives rise to functions over the state space from which optimal policies can then be derived. Discretization techniques based on standard finite-element or finite-difference techniques are widely used, but since the number of dimensions can be large one runs into the curse of dimensionality. We were able to show empirically that an adaptive sparse grid approach can handle typical optimal control problems in up to 6 dimensions and therefore break the curse of dimensionality to a certain extent, although there are theoretical and practical problems for the current algorithm. It has to be investigated further under which conditions the numerical procedure can be successful. This includes further empirical investigations to push the efficiency and dimensionality boundary further, but in particular we aim to formulate a solid theoretical foundation of a finite-difference-style sparse grid approach to allow suitable selections of the arising parameters of the numerical scheme.
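
For concreteness, the dynamic-programming core that the sparse grid approach is meant to accelerate is a value iteration of roughly the following kind; the sketch below uses a dense 1-D grid with linear interpolation and illustrative dynamics and costs, which is exactly the construction that becomes infeasible in higher dimensions and is to be replaced by adaptive sparse grids.

    # Value iteration for a discounted control problem on a dense 1-D state grid.
    # Grid, dynamics and cost are illustrative; a dense grid is shown only to make
    # the role of function approximation over the state space explicit.
    import numpy as np

    x = np.linspace(-1.0, 1.0, 201)            # state grid
    actions = np.linspace(-0.1, 0.1, 21)       # admissible controls
    gamma = 0.95
    cost = lambda s, u: s**2 + 0.1 * u**2

    V = np.zeros_like(x)
    for _ in range(500):                       # iterate the Bellman operator
        Q = np.empty((len(actions), len(x)))
        for i, u in enumerate(actions):
            x_next = np.clip(x + u, -1.0, 1.0)
            Q[i] = cost(x, u) + gamma * np.interp(x_next, x, V)   # interpolate V at successors
        V_new = Q.min(axis=0)
        if np.max(np.abs(V_new - V)) < 1e-8:
            break
        V = V_new

    policy = actions[Q.argmin(axis=0)]         # greedy policy derived from the value function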

Applicant: Grasedyck (Aachen)
The aim of this research proposal is the further development of the data-sparse hierarchical tensor representations (hierarchical Tucker and TT) from the first phase of this project. Our goal is to introduce dimension-adaptivity as well as a local separation technique. The latter is required for the application to partial differential equations with stochastic parameters, as the deterministic space-dependent variables are typically strongly linked to local parameters and thus destroy the (global) separability of deterministic and stochastic variables. Neither technique is restricted to the aforementioned application area; both are of general interest for the understanding of hierarchical tensor formats.

Applicants: Hackbusch (Leipzig), Schneider (Berlin), Tyrtyshnikov (Moscow)
Research staff: Rohwedder (Berlin)
Many problems from modern physics and other fields are posed on high-dimensional tensor spaces in a natural way. Numerical approximation of solutions suffers from the curse of dimensionality, i.e. the computational complexity scales exponentially with the dimension of the space. Their numerical treatment therefore always involves methods of approximation by data-sparse representation on tensor spaces. There has very recently been remarkable progress in tensor product approximations. Newly introduced multiscale tensorisation techniques are provided by novel tensor formats. We aim to continue and enhance these developments and to use them for the design and critical assessment of algorithms for the new tensor formats treating high-dimensional spectral problems. A particular emphasis is on the systematic examination of the potential which these methods may have in connection with various well-known algorithms used for the treatment of the electronic Schrödinger equation. In particular, our goal is to combine these novel techniques with the requirements and the existing well-established algorithms for the electronic Schrödinger equation, in particular with one-particle models such as the Hartree-Fock and Kohn-Sham methods. In this respect, the proposal aims at the continuation of our project within the SPP 1324, concerned with the development of novel, mathematically sound tensor product methods for the numerical treatment of high-dimensional eigenvalue problems.
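
To fix ideas, the tensor-train (TT) format mentioned above represents a d-dimensional array by a chain of three-dimensional cores; a full tensor can be converted into this form by successive truncated SVDs. The sketch below does this for a small full tensor, which is feasible only in toy dimensions and serves purely to illustrate the format, not the project's algorithms.

    # TT-SVD sketch: decompose a small full tensor into tensor-train cores by
    # successive truncated SVDs, then evaluate the TT representation at an index.
    import numpy as np

    def tt_svd(tensor, eps=1e-10):
        dims, d = tensor.shape, tensor.ndim
        cores, r_prev = [], 1
        C = tensor.reshape(r_prev * dims[0], -1)
        for k in range(d - 1):
            U, s, Vt = np.linalg.svd(C, full_matrices=False)
            r = max(1, int(np.sum(s > eps * s[0])))          # truncation rank
            cores.append(U[:, :r].reshape(r_prev, dims[k], r))
            C = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
            r_prev = r
        cores.append(C.reshape(r_prev, dims[-1], 1))
        return cores

    def tt_eval(cores, idx):
        v = np.ones((1,))
        for G, i in zip(cores, idx):
            v = v @ G[:, i, :]                               # contract core by core
        return v.item()

    A = np.random.default_rng(2).random((3, 4, 3, 2))
    cores = tt_svd(A)
    print(abs(A[1, 2, 0, 1] - tt_eval(cores, (1, 2, 0, 1)))) # reconstruction error ~ 1e-15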

Applicants: Hein (Saarbrücken), Setzer (Saarbrücken)
Linear eigenproblems are the basis of a large number of methods for data analysis in statistics, machine learning, image processing and many other fields. Nonlinear eigenproblems have been studied from a theoretical point of view in nonlinear functional analysis for decades. Nevertheless, they have not become a common tool in data analysis or other domains, despite the fact that nonlinear eigenproblems provide additional modeling power which can be used to enforce stronger properties of the eigenvectors, like sparsity or robustness against outliers, which is not possible in the linear case. The reason seems to be the lack of algorithms for the nonconvex and often nonsmooth optimization problems arising in the computation of nonlinear eigenvectors. Thus, the purpose of this research proposal is to develop a general toolbox of algorithms to compute nonlinear eigenvectors. As today's problems in data analysis are characterized by both high dimensionality of the feature space and a large number of samples, we place particular emphasis on the development of efficient methods which can cope with such cases. In the second phase of the project we target specific applications of nonlinear eigenproblems in machine learning, statistics and image processing.

Applicants: Iske (Hamburg), Plonka-Hoch (Göttingen)
Research staff: Heinen (Göttingen), Krause-Solberg (Hamburg)
In the second period of this project we focus on the development and numerical analysis of novel adaptive approximation methods for high-dimensional signal data processing, where our joint research will provide efficient multiscale algorithms relying on modern tools from approximation theory, harmonic analysis, differential geometry, and algebraic topology. Special emphasis is placed on (a) scattered data denoising by using wavelet transforms and on (b) nonlinear dimensionality reduction by manifold learning. In (a), we will generalize our previous results on the Easy Path Wavelet Transform (EPWT) from image data to noisy scattered data taken from high-dimensional signals. To this end, we will develop new denoising methods based on diffusion maps and wavelet transforms along random paths. Moreover, we will extend our previous theoretical results on asymptotic N-term approximations to obtain optimally sparse data representations for piecewise Hölder smooth functions on manifolds by the EPWT. In (b), we will continue our joint research concerning the design and numerical analysis of efficient, robust and reliable nonlinear dimensionality reduction methods, where adaptive multiscale techniques, persistent homology methods, and meshfree kernel-based approximation schemes will play a key role. The new methods will be applied to relevant problems in the separation, classification, and compression of high-dimensional signals.
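
As an illustration of the kind of nonlinear dimensionality reduction referred to in (b), the following sketch computes a diffusion-maps embedding of noisy samples lying near a one-dimensional curve embedded in a higher-dimensional space. Data, kernel bandwidth and embedding dimension are illustrative choices, not the methods developed in the project.

    # Diffusion-maps sketch: embed noisy samples near a circle in R^10 into two
    # diffusion coordinates. Bandwidth and data are illustrative choices.
    import numpy as np

    rng = np.random.default_rng(3)
    t = rng.uniform(0, 2 * np.pi, 400)
    X = np.zeros((400, 10))
    X[:, 0], X[:, 1] = np.cos(t), np.sin(t)
    X += 0.05 * rng.standard_normal(X.shape)            # noisy circle in R^10

    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)   # squared pairwise distances
    eps = np.median(d2)                                 # simple bandwidth heuristic
    K = np.exp(-d2 / eps)                               # Gaussian kernel
    P = K / K.sum(axis=1, keepdims=True)                # row-stochastic diffusion operator

    w, V = np.linalg.eig(P)
    order = np.argsort(-w.real)                         # leading eigenvalue 1 is trivial
    embedding = w.real[order[1:3]] * V.real[:, order[1:3]]   # first nontrivial coordinates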

Applicant: Jahnke (Karlsruhe)
Numerical methods for stochastic reaction networks are to be constructed, analyzed and applied. Such networks are described by a Markov jump process on a large and possibly high-dimensional state space. The corresponding time-dependent probability distribution is the solution of the chemical master equation (CME). For its numerical approximation, the main challenge is the curse of dimensionality: the number of degrees of freedom scales exponentially with the number of dimensions, making the application of traditional ODE methods impossible. The bimodality and metastability of many biological systems pose additional difficulties. Numerical methods for both the time-dependent and the stationary CME will be devised. The effects of metastability will be investigated by Transition Path Theory, which allows one to compute the transition pathways and rates between metastates. The multi-scale nature of the problem will be exploited to derive hybrid models which reduce the huge number of degrees of freedom significantly by representing only the critical species by a CME and the majority of species by other differential equations.
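
For a network of R reactions with stoichiometric vectors \nu_j and propensity functions a_j, the chemical master equation for the probability p(t,x) of the copy-number state x reads

    \frac{\partial p(t,x)}{\partial t} \;=\; \sum_{j=1}^{R} \Bigl( a_j(x-\nu_j)\, p(t,x-\nu_j) - a_j(x)\, p(t,x) \Bigr),

one ordinary differential equation per admissible state x; since the number of states grows exponentially with the number of species, this is precisely where the curse of dimensionality enters.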

Applicants: Kiesel (Duisburg), Urban (Ulm)
Within the last few years the markets for commodities, in particular energy-related commodities and electricity, have changed substantially. Due to deregulation, energy companies are now allowed to trade not only the commodity electricity, but also various derivatives on electricity on several energy exchanges such as the EEX. In addition, the introduction of a European Union wide emissions trading scheme (the EU-ETS) has introduced carbon related products. Within our project we will address two problems unique to the electricity market. Firstly, we will analyze swing options (which are unique to the energy-related markets): we will develop and analyze Reduced Basis Methods (RBM) to obtain efficient hedging portfolios in terms of forward contracts, which are actually traded on the market. Secondly, we will analyze CO2-permit prices for a multi-period trading scheme which allows banking of permits. Our modeling approach will lead to systems of PDEs which can be treated with the methodology developed in phase one of the current grant scheme.

Applicants: Kunis (Osnabrück), Potts (Chemnitz)
Research staff: Volkmer (Chemnitz)
A straightforward discretisation of high dimensional problems often leads to a curse of dimensions and thus the use of sparsity has become a popular tool. Efficient algorithms like the fast Fourier transform (FFT) have to be customised to these thinner discretisations and we focus on three major topics regarding the Fourier analysis of high dimensional functions: We will develop stable and effective spatial discretisations of hyperbolic cross FFTs by the numerical optimisation of rank-1 lattice rules. If the sparsity pattern is unknown, we study greedy methods and Prony-type methods for the sparse recovery of signals from samples. Besides a comparison of both approaches, we will study recent deterministic compressed sensing approaches, extend the stable Prony method to complex and closely neighbouring frequencies, and suggest new d-variate generalisations based on the combination of solutions to low dimensional subproblems. Finally, we will proceed with the development of a stable variant of the so-called butterfly sparse FFT, which is used if the support of the Fourier coefficients as well as the spatial sampling set are subsets of 'smooth' curves and surfaces. In summary, we are interested in computational methods for the Fourier analysis of sparse high dimensional signals and continue our research based on results of the first period. All algorithms will be implemented within Matlab and C software packages and we will continue cooperations with application experts within, but not restricted to, nuclear magnetic resonance spectroscopy, photoacoustic imaging, and magnetic resonance imaging.
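
The reason rank-1 lattice rules combine so well with FFT techniques is that evaluating a multivariate trigonometric polynomial with sparse (e.g. hyperbolic cross) frequency support at all nodes of a rank-1 lattice collapses to a single one-dimensional FFT. A minimal sketch with arbitrarily chosen frequency set, generating vector and lattice size (illustrative values, not optimized lattice rules):

    # Evaluating a d-variate trigonometric polynomial with sparse frequency
    # support at the rank-1 lattice nodes x_j = (j*z mod n)/n via one 1-D FFT.
    import numpy as np

    d, n = 3, 257
    z = np.array([1, 33, 119])                                        # generating vector
    freqs = np.array([[0, 0, 0], [1, 0, 0], [0, 2, 1], [-3, 1, 0]])   # sparse frequencies k
    coeffs = np.array([1.0, 0.5 - 0.25j, 0.1j, -0.3])                 # Fourier coefficients c_k

    # aggregate coefficients by the residue (k.z) mod n, then one length-n FFT pass
    ghat = np.zeros(n, dtype=complex)
    np.add.at(ghat, (freqs @ z) % n, coeffs)
    values_fft = n * np.fft.ifft(ghat)                                # f(x_j) for all j at once

    # slow direct evaluation at the lattice nodes, for comparison
    nodes = (np.outer(np.arange(n), z) % n) / n
    values_direct = np.exp(2j * np.pi * (nodes @ freqs.T)) @ coeffs
    print(np.max(np.abs(values_fft - values_direct)))                 # agreement up to rounding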

Applicant: Lubich (Tübingen)
Research staff: Brumm (Tübingen)
The project deals with numerical methods for molecular quantum dynamics, both in devising new methods and in the mathematical analysis of known and novel methods. The numerical difficulties lie in the high dimensionality of the underlying Schrödinger equation as the basic model equation as well as in the treatment of high oscillations and multiple scales. The project focuses on time-dependent aspects, complementing approaches to the stationary Schrödinger eigenvalue problem within the Priority Research Programme 1324. The research will concentrate on dynamical low-rank approximations such as the time-dependent multi-configuration Hartree and Hartree-Fock methods on the one hand, and on a novel computational approach to the time-dependent Schrödinger equation in semi-classical scaling (for nuclei in a molecule) based on Hagedorn wavepackets.
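
The dynamical low-rank ansatz behind the multi-configuration time-dependent Hartree (MCTDH) family constrains the wavefunction to

    \psi(x_1,\dots,x_f,t) \;\approx\; \sum_{j_1=1}^{n_1} \cdots \sum_{j_f=1}^{n_f} A_{j_1 \dots j_f}(t)\, \prod_{k=1}^{f} \varphi^{(k)}_{j_k}(x_k,t),

with both the coefficient tensor A and the single-particle functions \varphi^{(k)} evolving in time; equations of motion on this low-rank manifold are obtained from the Dirac-Frenkel time-dependent variational principle.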

Applicant: Novak (Jena)
Research staff: Rudolf (Jena)
We study fast algorithms for the numerical computation of weighted sums and integrals over domains that are high dimensional or have a complicated structure. We want to tune the algorithms in an optimal way, depending on properties of the integrand, the density, and the domain. Weighted sums and integrals are met in numerous applications, especially in statistical physics, in statistics, in financial mathematics and in computer science. It is amazing that the Metropolis algorithm is one of the most widely used algorithms but, nevertheless, not many error bounds are known. For the applications, stopping criteria as well as the time for warming up are most important. Because of the lack of (nonasymptotic) error bounds, these important parts of the algorithms are usually chosen heuristically. In contrast, we proved explicit error bounds. Now we will study further applications of these bounds as well as modified error bounds and algorithms.
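
A minimal random-walk Metropolis integrator for a weighted sum or integral E_pi[f] with respect to an unnormalized density pi is sketched below; burn-in and chain length are chosen heuristically here, which is precisely the practice that the explicit nonasymptotic error bounds mentioned above are meant to replace. Target, integrand and step size are illustrative.

    # Random-walk Metropolis sketch for E_pi[f] with an unnormalized target density.
    # Burn-in and chain length are heuristic choices, exactly the parameters the
    # nonasymptotic error bounds aim to control.
    import numpy as np

    rng = np.random.default_rng(4)
    log_pi = lambda x: -0.5 * np.sum(x**2) - np.sum(np.abs(x))   # unnormalized log-density
    f = lambda x: np.sum(x**2)                                   # integrand
    dim, step, burn_in, n_steps = 10, 0.5, 2000, 50000

    x = np.zeros(dim)
    values = []
    for k in range(n_steps):
        proposal = x + step * rng.standard_normal(dim)
        if np.log(rng.uniform()) < log_pi(proposal) - log_pi(x): # accept/reject step
            x = proposal
        if k >= burn_in:
            values.append(f(x))

    print(f"estimate of E_pi[f]: {np.mean(values):.3f}")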

Applicant: Rohde (Bochum)
The subject is inference and adaptive estimation of functionals of high-dimensional stochastic differential equations (SDEs) under so-called sparsity constraints. In high-dimensional statistical problems (where the parameter dimension is very large compared to the sample size), the parameter is poorly estimable. Hence, there is increasing interest in functionals about which statistical inference is possible. As a conceptually new aspect, we consider for the mathematical analysis a triangular array of SDEs in which the diffusion coefficient is assumed to be a matrix of substantially lower rank than the dimension of the process, which is supposed to grow with the length of the observation time interval. This is a dimension reduction or sparsity assumption. In nonparametric statistics one typically assumes that the drift of the SDE belongs to some smoothness class, for instance of Hölder type, which, however, does not necessarily guarantee that a strong solution of the SDE under consideration exists. Our goal is to determine the optimal rates of convergence for certain functionals of the drift in dependence on the rank and geometry of the diffusion coefficient, as well as to construct fully data-driven estimators. The results are to be extended to discrete-time observations. In high-dimensional problems, algorithmic aspects are of increased importance. In particular, estimation procedures shall be worked out which make our theoretical findings applicable in practice.

Applicant: Yserentant (Berlin)
Research staff: Uschmajew (Berlin)
The project considers the electronic Schrödinger equation of quantum chemistry that describes the motion of N electrons under Coulomb interaction forces in a field of clamped nuclei. Solutions of this equation depend on 3N variables, three spatial dimensions for each electron. Approximating the solutions is thus inordinately challenging. It is conventionally believed that the accuracy cannot be systematically improved without the effort truly exploding for larger numbers of electrons and that a reduction to simplified models, such as those of the Hartree-Fock method or density functional theory, is the only tenable approach for the approximation of the solutions. Results of the applicant indicate that this conventional wisdom need not be ironclad: the regularity of the solutions, which increases with the number of electrons, the decay behavior of their mixed derivatives, and the antisymmetry enforced by the Pauli principle contribute properties that allow these functions to be approximated with an order of complexity which comes arbitrarily close to that of a system of two electrons or even only one electron. The goal of the project is to extend and refine these results, to identify structural properties of the wavefunctions that could ideally enable breaking the curse of dimensionality, and to develop the present approximation methods further to true discretizations of the Schrödinger equation.


Abstracts of the First Phase

Applicants: Bender (Saarbrücken), Bollhöfer (Braunschweig)
Research staff: Steiner (Saarbrücken)

Backward stochastic differential equations (BSDEs) are a powerful tool to solve problems arising in mathematical finance, e.g. in the pricing of financial derivatives, the hedging of financial risks, and optimal investment problems. Moreover, they yield stochastic representation formulas for semi-linear parabolic Cauchy problems. Therefore the numerical solvability of BSDEs is a problem of high practical relevance, and it is particularly challenging if a BSDE depends on a high-dimensional system of random sources. In recent years several Monte Carlo algorithms for BSDEs based on stochastic meshes or on quantization techniques have been developed. A serious drawback of these algorithms is that they produce only point estimators for the solution, whose quality is not validated. The aim of this project is to add upper (resp. lower) biased terms to these algorithms, which theoretically vanish in the limit. In the practically relevant pre-limit situations the difference between the corresponding ‘upper’ and ‘lower’ solutions may serve as an indicator of the success of the numerical procedure. In particular, the absolute size of the biased terms can be monitored in the single discretization steps, which allows for the development of adaptive algorithms that apply more expensive estimators in critical steps. Apart from an error analysis of the additional biased terms for generic estimators of conditional expectations, a more detailed one for least-squares Monte-Carlo estimators is planned.

Applicants: Creutzig (Darmstadt), Dereich (Münster), Müller-Gronbach (Passau), Ritter (Kaiserslautern), Scheutzow (Berlin)
Research staff: Heidenreich (Kaiserslautern), Schottstedt (Marburg), Yaroslavtseva (Passau)

Different views on quadrature problems for SDEs lead to rather different algorithmic approaches, e.g., PDE methods based on the Fokker-Planck equation, deterministic or randomized high-dimensional numerical integration for explicitly solvable SDEs, and Monte Carlo simulation of SDEs. In this project we employ the concept of approximation of probability distributions as the basis for quadrature of SDEs, both for constructing new deterministic and randomized algorithms and for establishing optimality results. Our work programme consists of the following three packages:

  • (A) Constructive Quantization of SDEs
  • (B) Quantization and Random Sampling of Multiple Itô Integrals
  • (C) Nonlinear Approximation for Lévy-driven SDEs

In (A) we will use quantization to construct quadrature formulas for SDEs. We do not only analyze the quantization error (quadrature error), but we also study the computational cost of constructing almost optimal quantizations (quadrature formulas). This issue is particularly relevant in the context of SDEs, since here the distributions are only given implicitly. In (B) quantization will be studied for multiple Itô integrals. We intend to build up databases via quantization and to develop randomization techniques that offer a new way to efficiently use higher-order Itô-Taylor schemes in multilevel Monte Carlo methods for quadrature of SDEs. In (C) we will use nonlinear approximations of Lévy processes and scalar SDEs driven by them to develop efficient multilevel Monte Carlo methods for quadrature of Lévy-driven SDEs.
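
As a toy illustration of the constructive quantization theme in (A), the following Lloyd-type iteration builds an N-point discrete approximation of a one-dimensional target distribution from Monte Carlo samples. The target distribution, the number of points and the initialization are arbitrary choices; the project is concerned with far more structured objects such as the laws of SDE solutions and multiple Itô integrals.

    # Lloyd-type sketch: N-point quantization of a sampled 1-D distribution.
    import numpy as np

    rng = np.random.default_rng(5)
    samples = np.exp(-0.02 + 0.2 * rng.standard_normal(100000))   # lognormal target (illustrative)

    N = 8
    centers = np.quantile(samples, (np.arange(N) + 0.5) / N)      # quantile initialization
    for _ in range(50):                                           # Lloyd iterations
        nearest = np.abs(samples[:, None] - centers[None, :]).argmin(axis=1)
        centers = np.array([samples[nearest == i].mean() for i in range(N)])
    weights = np.bincount(nearest, minlength=N) / len(samples)    # cell probabilities

    # the discrete measure sum_i weights[i] * delta_{centers[i]} approximates the target law
    print(np.sum(weights * centers), samples.mean())              # compare first moments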

Applicant: Dahlke (Marburg)

Applicants: Dahlke (Marburg), Maaß (Bremen), Stevenson (Amsterdam)
Research staff: Friedrich (Marburg), Ressel (Bremen)

The aim of this project is the development of optimally convergent adaptive wavelet schemes for complex systems. In particular, we shall be concerned with (nonlinear) elliptic and parabolic operator equations on nontrivial domains as well as with the related inverse parameter identification problems. As always, the construction of suitable wavelet bases on these domains is a principal problem. In this project, we shall use variants of adaptive frame schemes in combination with domain decomposition ideas. Another bottleneck is the curse of dimensionality. We attack this problem by means of tensor product approximation techniques that realize dimension-independent convergence rates. Our findings will be applied to well-posed elliptic and parabolic operator equations. The resulting adaptive forward solvers will be combined with regularization techniques for the related inverse parameter identification problem. We aim at analyzing and developing Tikhonov regularization schemes with sparsity constraints for such nonlinear inverse problems. As a model problem we will study the parameter identification problem for a parabolic reaction-diffusion system which describes the gene concentrations in embryos at an early state of development (embryogenesis).

Applicants: Dahlke (Marburg), Ritter (Kaiserslautern), Schilling (Dresden)
Research staff: Döhring (Kaiserslautern), Kinzel (Marburg), Lindner (Dresden)

This project is concerned with the numerical treatment of complex stochastic dynamical systems which are described by stochastic partial differential equations of parabolic type on piecewise smooth domains. These equations are driven by a (cylindrical) Wiener process W and may be interpreted as abstract Cauchy problems in a suitable function space. We study the pathwise approximation of the solution process, and the aim of our joint research project of numerical analysts and probabilists is to derive a fully adaptive numerical scheme in time and space. By using a variant of the Rothe method, we intend to use a semi-implicit time discretization scheme involving a suitable step-size control. Then, in each time step, an elliptic subproblem has to be solved. To this end, suitable variants of recently developed adaptive wavelet/frame schemes will be employed. Moreover, alternative noise representations based on biorthogonal wavelet or bi-frame wavelet expansions will be derived. We intend to establish the optimal convergence of the scheme, and we also want to ensure that adaptivity really pays for these problems. Therefore our investigations will be accompanied by regularity estimates for the exact solutions in specific scales of Besov spaces. Another central goal is the implementation and testing of the resulting algorithms. This part will be based on the Marburg software library.

Applicants: Dahmen (Aachen), Kutyniok (Berlin), Schwab (Zürich)
Research staff: Huang (Aachen), Lim (Berlin), Welper (Aachen)

Many important multivariate problem classes are governed by anisotropic features such as singularities concentrated on lower dimensional embedded manifolds. Examples are digital images as well as solutions of hyperbolic conservation laws or, more generally, of transport dominated equations. The problem of efficiently encoding signals with anisotropic features has triggered interesting developments during the past few years, culminating in new directional representation systems like curvelets and, more recently, shearlets. The latter concept has significant advantages over the former in that it offers a unified treatment of both the continuous and the digital world. Although these developments for explicitly given signals are far from complete, their understanding is far more mature than analogous considerations for implicitly given objects like solutions to operator equations of the above type. The central objective of this project is therefore the development and understanding of new discretization concepts that are able to economically and reliably capture anisotropic phenomena in the solutions of such equations.

Applicants: Ernst (Freiberg), Starkloff (Zwickau)
Research staff: Mugler (Zwickau), Ullmann (Freiberg)

Differential equations with random data have attained enormous popularity in the engineering community as an uncertainty quantification technique, in which variability and lack of information on the data of a differential equation are modeled by stochastic processes displaying significant correlations. This distinguishes this class of problems from what are usually referred to as stochastic ODEs or PDEs. Stochastic Galerkin (SG) or stochastic finite element methods (SFEM) are routinely used to approximate the solutions of such random differential equations, and much progress in their mathematical analysis and computational implementation has been made since the late 1990s. The proposed project intends to make a contribution in three aspects of SG methods:

(1) A deeper analysis of the mathematical foundations of the representation of stochastic processes by polynomial chaos expansions, particularly what are known as generalized polynomial chaos expansions. The latter generalize classical results of Wiener and Cameron/Martin—stating that any functional of the Wiener process may be expanded in a Hermite series in a sequence of independent Gaussian random variables—to expansions in other orthogonal polynomials in random variables with non-Gaussian distributions. Here we have shown in preliminary results that these “analogues” of the classical results do not hold without additional assumptions. Moreover, we have also been able to show that polynomial chaos expansions can be based on a single random variable rather than a sequence, with the incorporation of certain measurable functions. We propose to analyse whether this observation can help to reduce the number of independent random variables that must be retained in SG computations to obtain a satisfactory approximation and yet still yield a practical method.

(2) A systematic investigation of input random field models and the extraction of statistical data from the output (solution) random fields of SG computations. For the input random fields, the issues are ensuring well-posedness of the continuous and discrete operators, the justification of additional independence assumptions on the basic random variables used to represent the input random fields, and the sensitivity of the SG results with respect to assumptions made on statistical properties of the input fields such as correlation structure, marginals, and various parameters. We further propose to investigate the numerical representation of non-homogeneous random fields, in particular fields displaying different statistical properties in different subdomains of the region of interest. Regarding the output random fields, we will investigate numerical techniques for extracting statistical quantities such as moments, distribution functions, confidence surfaces and the probabilities of given events for a meaningful interpretation of the results.

(3) Since SG discretizations yield extremely large linear systems of equations, their efficient solution remains a major bottleneck in their computational realization. Multilevel methods, known for their optimal complexity for a large class of (deterministic) boundary value problems, have been applied to SG discretizations only for the deterministic parts of the operators. To reduce the overall complexity, however, requires applying the multilevel iteration also to the stochastic variables. We propose to investigate to what extent modern multilevel subspace correction methods can be applied to the stochastic discretization and, if not, identify the properties which preclude this.

Applicant: Garcke (Bonn)
Research staff: Klompmaker (Berlin)

Reinforcement learning is an aspect of machine learning where representations of functions over a high-dimensional space are necessary. The underlying problem is that of an agent that must learn behaviour through trial-and-error interactions with a dynamic environment which is described by a state space. This project is concerned with the discretisation necessary in the case of a continuous state space. One uses dynamic programming methods to (numerically) estimate the utility of taking actions in states of the world; this gives rise to functions over the state space. In the closely related field of optimal control, discretisation techniques using finite-element or finite-difference methods are widely used. Since the number of dimensions can be large, one runs into the curse of dimensionality. We propose to investigate two modern numerical methods for function representations in high dimensions, sparse grids and sums of separable functions, in this context. They will allow us to break the curse of dimensionality to a certain extent. This project aims to make significant progress for function approximation in continuous state spaces of up to, say, 15 dimensions using an adaptive sparse grid approach.

Applicant: Grasedyck (Aachen)

The aim of this research proposal is to derive a novel approximate arithmetic for computations with high-dimensional data in data-sparse form. The new data-sparse form is a dimension-hierarchical model for the representation of high-dimensional tensors or multivariate functions, where the basic building block is the Tucker model. The new model is linear scaling in the dimension, like the PARAFAC model, but it is based on a hierarchy of stable decompositions, like the Tucker model, so that this new model combines the advantages of both of them favourably.

In the first part of the project we develop a general scheme for the approximation of a given (arbitrary) function in the PARAFAC and in the new hierarchical model. Here, a black-box type approximation shall be developed. Also conversions from one of the mentioned standard models into the new one are planned. In the second part, for the use of basic arithmetic operations (linear combinations, iterative solvers), an approximate arithmetic in the new format shall be developed.

Applicant: Griebel (Bonn)

In many areas of science, it can be observed that problems of high dimension possess only low effective dimensionality. This may be exploited to circumvent the curse of dimensionality of a numerical approximation. The main problem, however, is to find a proper coordinate system and the best associated effective dimension. This is especially difficult if one allows nonlinear intrinsic coordinates, which, however, often allow for a substantially better dimension reduction than a linear approach such as PCA. In this project, we use the approach of regularized principal manifolds to compute such coordinates. Since the remaining dimensions may nevertheless be more than three, we employ sparse grids for the representation of the coordinate functions of the manifold. This way, the numerical complexity is further reduced and problems with up to 20 intrinsic dimensions may be handled efficiently. We will derive the effective dimension and the intrinsic coordinates automatically by a dimension-adaptive sparse grid method using dimension-sensitive error indicators.

Applicants: Hackbusch (Leipzig), Schneider (Berlin)

Many challenging problems of numerical computations arise from problems involving a high spatial dimension. For a fine grid resolution even 3 dimensions cause a problem, but 6 or even much higher dimensions require quite new methods, since the standard approaches have a computational complexity growing exponentially in the dimension (“curse of dimensionality”).

A remedy is the use of data-sparse matrix representations or corresponding constructions exploiting tensor product representations. Here, we focus on eigenvalue problems in this field. While the design of the algorithms is rather general, the main applications are problems from electronic structure calculations. Many of the developed methods may be applied to general problems stemming from elliptic differential or integral operators. In particular, the basic electronic Schrödinger equation is an eigenvalue problem for an elliptic 2nd order partial differential equation in high dimensions. Alternatively to a direct treatment of this original problem we would like to exploit successful developments in quantum chemistry, mainly putting newly developed methods on top of well-established electronic structure programs. A major focus will be on eigenvalue problems in Density Functional Methods. Perhaps there are further instances where the development of the project would contribute to numerical methods in electronic structure calculation, e.g., adaptive CI and CC methods and Jastrow factor calculation.

Applicant: Holtz (Berlin)
Research staff: Jokar (Berlin)

1. Exploring sufficient conditions for sparse recovery. The restricted isometry property guarantees good performance of compressive sensing matrices. However, it is too strong and hard to verify in practice. We suggest a detailed investigation of other sufficient conditions, e.g., the null space property.
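
For reference, the null space property of order s requires that every nonzero vector v in the kernel of the measurement matrix A satisfies, for every index set S with |S| ≤ s,

    \|v_S\|_1 \;<\; \|v_{S^c}\|_1 ,

i.e. no kernel element may concentrate most of its l1-mass on few coordinates. This condition is known to be equivalent to exact recovery of all s-sparse vectors by l1 minimization, and, unlike the restricted isometry property, it is both necessary and sufficient.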

2. Deterministic construction of compressed sensing matrices. We propose a systematic study of compressive properties of various structured matrices such as Toeplitz, cyclic, generalized Vandermonde, 0-1 matrices, etc. Methods from graph theory are suggested to analyze matrices with underlying graph structure, and specific structured matrix techniques are to be applied to matrices with low displacement rank and other known structures. This systematic study should lead to a deterministic construction of optimally performing compressive sensing matrices.

3. Applications of frame theory to compressive sensing and vice versa. We propose to apply compressive sensing techniques to the Feichtinger and the Paving conjectures. Another research task is to fully understand the existence and compressive properties of equiangular tight frames.

4. Applications of compressive sensing techniques to PDEs. A specific scheme is proposed for studying discretizations of boundary value problems for elliptic PDEs, which is based on compressive sensing ideas. We intend to develop and analyze this scheme in detail, producing a new PDE solver.

Applicants: Iske (Hamburg), Plonka-Hoch (Göttingen)
Research staff: Guillemard (Hamburg), Tenorth (Duisburg)

The goal of the project is the development and numerical analysis of new adaptive approximation algorithms for sparse data representation in signal and image processing. To this end, modern tools from approximation theory, harmonic analysis and nonlinear PDEs are employed to design efficient multiscale methods for the compression, filtering and denoising of multi-dimensional signal data. The resulting numerical algorithms rely on local adaption to the geometry and the regularity of the underlying signals, yielding highly nonlinear approximation schemes. The different techniques utilized include variational methods, anisotropic diffusion-reaction equations, dimensional reduction through manifold learning, reproducing kernels, multiscale frames, meshfree particle methods, and greedy algorithms for efficient data reduction. Special emphasis is placed on real-world applications to medical imaging, where new multiscale methods for compression, denoising and texture separation of medical image data are developed. Another challenging application in neuro- and bioscience concerns the analysis of electromyogram (EMG) data, where kernel-based methods for dimensional reduction through manifold learning are used.

Applicants: Jahnke (Karlsruhe), Schütte (Berlin)

Numerical methods for stochastic reaction networks are to be constructed, analysed and applied. Such networks are described by a Markov jump process on a large and possibly high-dimensional state space. The corresponding time-dependent probability distribution is the solution of the chemical master equation (CME). For its numerical approximation, the main challenge is the large number of degrees of freedom which makes the application of traditional ODE methods impossible. The bimodality and metastability of many biological systems poses additional difficulties. Numerical methods for both the time-dependent and the stationary CME will be devised. The effects of metastability will be investigated by Transition Path Theory, which allows to compute the transition pathways and rates between metastates. The notorious curse of dimensionality will be avoided by hybrid models and/or by exploiting recent results from the field of Compressed Sensing.

Applicants: Kunis (Osnabrück), Potts (Chemnitz)
Research staff: Kämmerer (Chemnitz)

A straightforward discretisation of high dimensional problems often leads to a curse of dimensions and thus the use of sparsity has become a popular tool. In the Fourier analysis of signals two approaches have been studied over the last years: the fast Fourier transform on hyperbolic crosses and the use of ideas from Compressed Sensing.

Sparse grids and the approximation on hyperbolic crosses have led to algorithms that scale almost linearly in N. We intend to generalise the hyperbolic cross FFT for nonequispaced sampling and for more general sparsity patterns including the reconstruction from samples at scattered nodes. If the sparsity pattern is unknown, we study greedy methods for the sparse recovery of signals from samples. Of course, such Compressed Sensing methods take advantage of the above FFTs, e.g., for hyperbolic crosses. We analyse their recovery rates and arithmetic complexities and aim to exploit aliasing and adaptive sampling within sparse recovery schemes.

In summary, we are interested in computational methods for the Fourier analysis of sparse signals. All algorithms will be implemented within our software package and we intend to consider applications within but not restricted to magnetic resonance imaging.

Applicants: Lorenz (Braunschweig), Teschke (Neubrandenburg)
Research staff: Herrholz (Neubrandenburg)

This project aims at a thorough theory of compressed sensing for ill-posed problems and is hence located at the intersection of signal processing and ill-posed problems. Compressed sensing is a promising new field which tries to tackle the problem of high-dimensional data by combining the measuring and the compression step into one single process of “compressive sampling”. The theory is well developed for well-posed finite dimensional linear problems. The main points to be addressed in this project are a proper formulation in infinite dimensional spaces and the treatment of ill-posed operators (e.g. compact operators) – both linear and non-linear.

In the first phase we focus on the formulation of compressed sensing for linear ill-posed operators and on the algorithmic treatment of the arising l1 constrained minimization problems. For the formulation of compressed sensing of ill-posed problems we start with investigating the well-posed infinite dimensional case and analyze proper measurement matrices. The generalization to ill-posed problems is the next step. The main tool for an efficient and reliable algorithm will be the semismooth Newton approach in combination with a proper globalization technique.

While the first phase of the project is focused on the development of methods, the second phase will include applications in the areas of medical imaging, geo-data analysis, and color image restoration.

Applicant: Lubich (Tübingen)

The project deals with numerical methods for molecular quantum dynamics, both in devising new methods and in the mathematical analysis of known and novel methods. The numerical difficulties lie in the high dimensionality of the underlying Schrödinger equation as the basic model equation as well as in the treatment of high oscillations and multiple scales. The project focuses on time-dependent aspects, complementing approaches to the stationary Schrödinger eigenvalue problem within the Priority Research Programme 1324. The research will concentrate on dynamical low-rank approximations such as the time-dependent multi-configuration Hartree and Hartree-Fock methods on the one hand, and on a novel computational approach to the time-dependent Schrödinger equation in semiclassical scaling (for nuclei in a molecule) based on Hagedorn wave packets.

Applicant: Novak (Jena)
Research staff: Rudolf (Jena)

We study fast algorithms for the numerical computation of weighted integrals over domains that are high dimensional. We want to tune the algorithms in an optimal way, depending on properties of the integrand, the density, and the domain. High dimensional integrals with a density function are met in numerous applications, especially in statistical physics, in statistics, and in financial mathematics.

It is amazing that the Metropolis algorithm is one of the most widely used algorithms but, nevertheless, not many error bounds are known. For the applications, stopping criteria as well as the time for warming up are most important. Because of the lack of (nonasymptotic) error bounds, these important parts of the algorithms are usually chosen heuristically.

In contrast, we want to prove error bounds, in particular nonasymptotic error bounds without hidden (unknown) constants. We also want to construct new rapidly mixing Markov chain Monte Carlo methods for suitable classes of integrands, densities, and domains.

Applicant: Sickel (Jena)
Research staff: Hansen (Zürich)

Our aim is the investigation of linear and nonlinear methods for the approximation of tensor products of linear operators A1, ..., AN, N ≥ 2. To this end, we also need to study tensor products of classical function and distribution spaces, in particular Sobolev and Besov spaces.

Applicants: Kiesel (Duisburg), Urban (Ulm)
Research staff: Rupp (Ulm)

In recent years the financial markets have required the introduction of increasingly complex financial structures. Up to now, the risk management and valuation of these structures are not sufficiently understood, as the current credit crisis shows. The subject of our project is the development of efficient, reliable and fast valuation methods for so-called Collateralized Debt Obligations (CDOs) and exotic derivatives on tranches of CDOs.

Our strategy will be based on Partial Differential Equations (PDEs), using an adaptive wavelet method for valuation. The equations to be considered will be Partial Integro-Differential Equations (PIDEs). Exotic options lead to obstacle problems which we plan to treat by means of an adaptive projection scheme. Moreover, we investigate parameter dependencies and sensitivities both analytically (if possible) and by Automatic Differentiation.

Finally, we compare this approach with (quasi-)Monte Carlo models in order to investigate the performance of PDE-based models.

Applicant: Yserentant (Berlin)
Research staff: Zeiser (Berlin)

The project considers the electronic Schrödinger equation of quantum chemistry that describes the motion of N electrons under Coulomb interaction forces in a field of clamped nuclei. Solutions of this equation depend on 3N variables, three spatial dimensions for each electron. Approximating the solutions is thus inordinately challenging, and it is conventionally believed that a reduction to simplified models, such as those of the Hartree-Fock method or density functional theory, is the only tenable approach. Recent results of the applicant indicate that this conventional wisdom need not be ironclad: the regularity of the solutions, which increases with the number of electrons, the decay behavior of their mixed derivatives, and the antisymmetry enforced by the Pauli principle contribute properties that allow these functions to be approximated with an order of complexity which comes arbitrarily close to that of a system of one or two electrons, depending on the spin configuration. Goal of the project is to extend these results and to identify structural properties of the wavefunctions that will ideally enable breaking the curse of dimensionality and to develop the present approximation methods further to true discretizations of the Schrödinger equation.
