CSSIP10

Workshop on Compressed Sensing,
Sparsity and Inverse Problems

September 6-7, 2010
TU Braunschweig
 

PROGRAM


The following experts have agreed to give talks.
A detailed program will be published in due course.



List of talks:
(ordered by name)
  1. "Variational Inequalities and Morozov\'s Discrepancy Principle"
    Anzengruber, Stephan
    Variational Inequalities can be viewed as generalizations of classical source conditions and structural assumptions. In this talk we will show how, for non-linear ill-posed problems, Variational Inequalities can be combined with Morozov's discrepancy principle to obtain convergence rates for Tikhonov regularization with $\ell_p$ penalties, $p \geq 1$, which are well-known to promote sparsity.
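    For readers unfamiliar with the parameter choice rule mentioned above, the following lines sketch Morozov's discrepancy principle for a generic non-linear operator equation $F(x) = y$ with noisy data $y^\delta$, $\|y^\delta - y\| \le \delta$; the symbols $F$, $\tau$ and $x_\alpha^\delta$ are illustrative and not taken from the talk:
    \[
      \text{choose } \alpha = \alpha(\delta, y^\delta) \text{ such that }\quad
      \| F(x_\alpha^\delta) - y^\delta \| = \tau\,\delta, \qquad \tau \ge 1,
    \]
    where $x_\alpha^\delta$ minimizes the Tikhonov functional $\|F(x) - y^\delta\|^2 + \alpha \|x\|_{\ell_p}^p$.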

  2. "Non-uniform sparse recovery with Gaussian matrices"
    Ayaz, Ulas
    Compressive sensing predicts that sufficiently sparse vectors can be recovered from highly incomplete information. Efficient recovery methods such as L1-minimization find the sparsest solution to certain systems of equations. Random matrices have become a popular choice for the measurement matrix; indeed, near-optimal uniform recovery results have been shown for such matrices. In this talk we focus on nonuniform recovery using Gaussian random matrices and L1-minimization. We provide a condition on the number of samples in terms of the sparsity and the signal length which guarantees that a fixed sparse signal can be recovered with a random draw of the matrix using L1-minimization. The constant 2 in the condition is optimal, and the proof is rather short compared to a similar result due to Donoho and Tanner. This is joint work with Holger Rauhut.
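    A minimal numerical sketch of the setting described above (not code from the talk; sizes, seed and the linear-programming reformulation are illustrative assumptions): draw a Gaussian matrix, measure a fixed sparse vector, and recover it by L1-minimization.

        # Nonuniform sparse recovery with a Gaussian matrix via basis pursuit,
        #   min ||x||_1  subject to  Ax = b,
        # recast as a linear program with x = u - v, u >= 0, v >= 0.
        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(0)
        N, m, k = 200, 80, 10                          # signal length, measurements, sparsity

        x0 = np.zeros(N)                               # fixed k-sparse signal
        support = rng.choice(N, size=k, replace=False)
        x0[support] = rng.standard_normal(k)

        A = rng.standard_normal((m, N)) / np.sqrt(m)   # Gaussian measurement matrix
        b = A @ x0                                     # noiseless measurements

        c = np.ones(2 * N)                             # objective: sum(u) + sum(v) = ||x||_1
        A_eq = np.hstack([A, -A])                      # equality constraint A(u - v) = b
        res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * N))
        x_hat = res.x[:N] - res.x[N:]

        print("relative recovery error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))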

  3. "Regularization of linear integral equations with noisy data and noisy operator"
    Bleyer, Ismael Rodrigo
    Regularization methods for linear ill-posed problems $ Kf = g $ have been extensively investigated when there exist noisy measurements $g^{\delta}$ of the true data $g$. Often, however, the operator is not known exactly either. A common way to address this is the regularized total least squares method. In our approach, we consider linear integral equations where we assume that both the kernel and the data are contaminated with noise, and use Tikhonov regularization for a stable reconstruction of both the kernel and the solution. Finally, we discuss computational aspects, develop an iterative shrinkage-thresholding algorithm for the reconstruction, and provide first numerical results.

  4. "Multilevel Preconditioning and Adaptive Sparse Solution of Inverse Problems"
    Dahlke, Stephan
    We are concerned with the efficient numerical solution of minimization problems in Hilbert spaces involving sparsity constraints. These optimization problems arise, e.g., in the context of inverse problems. We analyze a very efficient variant of the well-known iterative soft shrinkage algorithm for large or even infinite dimensional problems. This algorithm is modified in the following way: instead of prescribing a fixed threshold parameter, we use a decreasing thresholding strategy. Moreover, we use suitable adaptive schemes for the approximation of the infinite matrix-vector products. We also derive a block multiscale preconditioning technique which allows for local well-conditioning of the underlying matrices and for extending the concept of the restricted isometry property to infinitely labelled matrices. The careful combination of these ingredients gives rise to numerical schemes that are guaranteed to converge with exponential rate, and which allow for the controlled inflation of the support size of the iterates. We also present numerical experiments that confirm the applicability of our approach. (Joint work with M. Fornasier and T. Raasch)
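    The following lines give a minimal finite-dimensional sketch of iterative soft shrinkage with a decreasing thresholding strategy; the adaptive matrix-vector approximation and the multiscale preconditioning of the talk are not modelled, and all names and parameters are illustrative assumptions.

        # Iterative soft shrinkage (ISTA-type) iteration for
        #   min_f ||K f - g||^2 + 2*alpha*||f||_1,
        # with a threshold that decreases towards the target value alpha.
        import numpy as np

        def soft(x, t):
            """Componentwise soft-thresholding operator."""
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def iterative_soft_shrinkage(K, g, alpha, iters=200):
            step = 1.0 / np.linalg.norm(K, 2) ** 2           # step size from the spectral norm
            f = np.zeros(K.shape[1])
            for n in range(iters):
                thresh = max(alpha, alpha * 2.0 ** (5 - n))  # decreasing threshold schedule
                f = soft(f - step * K.T @ (K @ f - g), step * thresh)
            return f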

  5. "Linear Convergence Rates of l1-Regularization"
    Haltmeier, Markus
    In this talk we study the stable solution of ill-posed equations
    $Ax = y,$
    where $A : X \to Y$ is a linear operator between Hilbert spaces $X$ and $Y$. If this equation is ill-posed, then the solution does not depend continuously on the data $y$. In order to solve the equation stably, stabilization methods have to be applied. The most widely used stabilization technique is Tikhonov regularization,
    $\mathcal{T}_{\alpha,y}(x) := \|Ax - y\|^2 + \alpha R(x) \to \min.$
    Here $R$ is a penalty functional and $\alpha > 0$ is the regularization parameter.
    In recent years, motivated by the success of compressed sensing, the use of the $\ell^1$ penalty
    $R(x) = \|x\|_{\ell^1} := \sum_{\lambda \in \Lambda} |\langle \varphi_\lambda, x \rangle|$
    has become very popular. (Here $(\varphi_\lambda)_{\lambda \in \Lambda}$ is some basis of $X$.) In this talk we study Tikhonov regularization with the $\ell^1$ penalty term. In particular, we show that the range condition $\operatorname{ran}(A^*) \cap \partial R(x^+) \neq \emptyset$, together with a certain injectivity condition, allows linear error estimates (convergence rates) between the solution $x^+$ and minimizers of $\mathcal{T}_{\alpha,y}$. Moreover, we show that the range condition is even necessary for this linear convergence rate. A worked special case is sketched below the reference. This talk is based on joint work with M. Grasmair and O. Scherzer.

    • M. Grasmair, M. Haltmeier, and O. Scherzer. Necessary and sufficient conditions for linear convergence of l1-regularization. Reports of FSP S105 - "Photoacoustic Imaging" 18, University of Innsbruck, Austria, 2009.
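    As a worked special case (added for illustration; standard and not a result specific to the talk), take $A = \mathrm{Id}$ and an orthonormal basis $(\varphi_\lambda)_{\lambda \in \Lambda}$. Then the Tikhonov functional decouples over the coefficients and its unique minimizer is given componentwise by soft thresholding,
    \[
      \langle \varphi_\lambda, x_\alpha \rangle
      = S_{\alpha/2}\big(\langle \varphi_\lambda, y \rangle\big), \qquad
      S_t(s) = \operatorname{sgn}(s)\,\max(|s| - t,\, 0),
    \]
    which makes explicit how the $\ell^1$ penalty sets small coefficients exactly to zero.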


  6. "Compressed Sensing in Infinite Dimensions"
    Hansen, Anders C.
    Compressed sensing is a great tool for solving inverse problems and it has a wide range of applications in signal processing and medical imaging. However, the current theory covers only problems in finite dimensions, and thus there are plenty of infinite dimensional inverse problems that are not covered by the existing results. In this talk I will show how the finite dimensional theory can be extended to include problems in infinite dimensions. This allows for recovery of much more general objects such as analog signals and infinite resolution images. The tools required come from probability, operator theory, optimization and geometry of Banach spaces. I'll give an introduction to what is already known (accompanied by numerical examples) and discuss some of the open questions.

  7. "Compressed sensing for ill-posed problems: recovery principles and accuracy estimates"
    Herrholz, Evelyn
    This talk is concerned with compressive sampling strategies and sparse recovery principles for linear ill-posed problems. We provide compressed measurement models and accuracy estimates for sparse approximations of the solution of the underlying inverse problem. Thereby, we rely on Tikhonov variational and constrained optimization formulations. One essential difference to the classical compressed sensing framework is the incorporation of a joint sparsity measure allowing the treatment of infinite dimensional reconstruction spaces. The theoretical results are illustrated by a numerical experiment.

  8. "New and improved Johnson-Lindenstrauss embeddings via the Restricted Isometry Property"
    Krahmer, Felix
    The Johnson-Lindenstrauss (JL) Lemma states that any set of p points in high dimensional Euclidean space can be embedded into O(δ^{-2} log(p)) dimensions, without distorting the distance between any two points by more than a factor between 1 - δ and 1 + δ. We establish a new connection between the JL Lemma and the Restricted Isometry Property (RIP), a well-known concept in the theory of sparse recovery often used for showing the success of l_1-minimization. Consider an m x N matrix satisfying the (k, δ_k)-RIP and an arbitrary set E of O(e^k) points in R^N. We show that, with high probability, such a matrix with randomized column signs maps E into R^m without distorting the distance between any two points by more than a factor of 1 ± 4δ_k. Consequently, matrices satisfying the RIP of optimal order provide optimal Johnson-Lindenstrauss embeddings up to a logarithmic factor in N. Moreover, our results yield the best known bounds on the necessary embedding dimension m for a wide class of structured random matrices. In particular, for partial Fourier and partial Hadamard matrices, our method optimizes the dependence of m on the distortion δ: we improve the recent bound m = O(δ^{-4} log(p) log^4(N)) of Ailon and Liberty (2010) to m = O(δ^{-2} log(p) log^4(N)), which is optimal up to the logarithmic factors in N. Our results also yield new Johnson-Lindenstrauss bounds for several deterministic matrices with randomized column signs. This is joint work with Rachel Ward.
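    A minimal numerical illustration of the statement above (not code from the talk; matrix type, sizes and seed are assumptions, with a Gaussian matrix standing in for an RIP matrix): randomize the column signs of the matrix and measure the worst pairwise distance distortion over a small point set.

        # Embed p arbitrary points of R^N into R^m with an (approximately) RIP
        # matrix whose column signs have been randomized, and report the worst
        # relative distortion of pairwise distances.
        import numpy as np

        rng = np.random.default_rng(1)
        N, m, p = 1000, 64, 20

        E = rng.standard_normal((p, N))                  # arbitrary point set in R^N
        A = rng.standard_normal((m, N)) / np.sqrt(m)     # satisfies the RIP with high probability
        signs = rng.choice([-1.0, 1.0], size=N)
        Phi = A * signs                                  # randomized column signs, i.e. A @ diag(signs)

        worst = 0.0
        for i in range(p):
            for j in range(i + 1, p):
                d_orig = np.linalg.norm(E[i] - E[j])
                d_emb = np.linalg.norm(Phi @ (E[i] - E[j]))
                worst = max(worst, abs(d_emb / d_orig - 1.0))
        print("worst relative distortion:", worst)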

  9. "Efficient Calculation of Sparse Solutions for Inverse Problems"
    Lakhal, Aref
    We present an efficient method to regularize ill-posed problems with sparsity as prior knowledge. We combine mollification methods with standard techniques of optimization to derive an efficient algorithm for solving inverse problems with L1-constraints. Numerical tests illustrate the robustness of the algorithm.

  10. "Exact recovery for linear ill-posed problems with l^1 regularization"
    Lorenz, Dirk
    Ill-posed problems are problems in which the solution does not depend continuously on the data. In this talk we consider Tikhonov regularization of such problems, and especially focus on sparse regularization. By combining techniques from ill-posed problems and sparse recovery we show that it is possible to reconstruct the exact support of the unknown solution even for ill-posed problems. We illustrate our results on the ill-posed problem of digital particle holography.

  11. "Generalization of the Neural Gas for Learning Sparse Codes"
    Martinetz, Thomas
    The Neural Gas is a very simple but efficient algorithm for vector quantization which has found widespread applications. We show how it can be extended to learning linear subspaces and be cast into the framework of sparse coding. It compares favorably to other approaches like k-SVD and MOD.

  12. "Exact Sparse Solutions of Underdetermined Linear Equations"
    Pfetsch, Marc
    In this talk I will review combinatorial optimization techniques to find a sparsest solution of underdetermined linear equations. The corresponding exact methods are based on branch-and-cut. Not surprisingly, these techniques are computationally much more involved than heuristics such as basis pursuit. The solutions of this and other heuristics will be compared to the exact values. I will try to show the limits and possibilities of the currently available exact approaches.

  13. "Compressive Sensing and Function Recovery in High Dimensions"
    Rauhut, Holger
    The talk reviews some basics of compressive sensing, in particular the recovery of functions that are sparse in bounded orthonormal systems (e.g. the trigonometric system) from randomly chosen samples. We then apply compressive sensing techniques to the recovery of functions in high dimensions.

  14. "Iterative sparse recovery and regularization theory"
    Teschke, Gerd
    We shall be concerned with sparse recovery algorithms for inverse and ill-posed problems. In particular, we consider a projected steepest descent method, its stabilization and corresponding regularization results.

  15. "An Infeasible-Point Subgradient Method and Computational Comparison for lš-Minimization"
    Tillmann, Andreas
    The l1-minimization problem min{ ||x||_1 : Ax=b }, also known as Basis Pursuit (BP), has become important in Compressed Sensing due to its ability to sometimes yield the sparsest solution of the underdetermined system Ax=b. In the past few years, a lot of new algorithms solving (BP) or some (possibly regularized) variant of it have been developed. We introduce a subgradient scheme which -- unlike typical projected subgradient methods -- is not required to maintain feasibility of the iterates throughout the algorithmic progress. Convergence of this method (ISML1) to an optimal feasible point for (BP) can be proven under certain reasonable assumptions. In the second part of the talk we will present results of a computational comparison of various state-of-the-art l1-solvers and our method. We also show how a new optimality check can speed up these solvers and at the same time dramatically improve the solution quality. (Joint work with D. Lorenz and M. Pfetsch)
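    For context, the following lines sketch the classical projected subgradient baseline that the abstract contrasts with, in which every iterate is projected back onto the feasible set {x : Ax = b}; this is not the ISML1 method of the talk, and all names, step sizes and parameters are illustrative assumptions.

        # Projected subgradient method for Basis Pursuit,  min ||x||_1  s.t.  Ax = b.
        # The projection onto the affine set {x : Ax = b} uses the pseudoinverse of A.
        import numpy as np

        def projected_subgradient_bp(A, b, iters=500):
            A_pinv = np.linalg.pinv(A)
            x = A_pinv @ b                        # feasible (least-norm) starting point
            best, best_val = x.copy(), np.sum(np.abs(x))
            for k in range(1, iters + 1):
                g = np.sign(x)                    # a subgradient of ||.||_1 at x
                x = x - (1.0 / k) * g             # diminishing step size
                x = x - A_pinv @ (A @ x - b)      # project back onto {x : Ax = b}
                val = np.sum(np.abs(x))
                if val < best_val:                # keep the best feasible iterate seen
                    best, best_val = x.copy(), val
            return best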

  16. "Quality assessment of the l1 penalization method for sparse recovery"
    Verhoeven, Caroline
    The l1 penalization method is often used for sparse recovery in (linear) inverse problems. Most research on the effectiveness of this method focuses on measurement matrices with mutually incoherent columns or which satisfy the restricted isometry property. We will study matrices which do not satisfy these properties. Some typical recovery errors are computed in order to show how they are influenced by the sparsity of the desired solution, the noise level in the data, the number of data and the singular value spectrum of the matrix. These results are compared to known theoretical bounds. We also illustrate the ability of this method to predict relative errors on a magnetic tomography example. (Joint work with I. Loris)

  17. "Regularization results for inverse problems with sparsity functional"
    Wachsmuth, Gerd
    We consider an optimization problem of the type \begin{equation} \left\{ \begin{aligned} \text{Minimize}\quad & F(u) = \frac12 \| \mathcal{S} u - z_\delta \|_{H}^2 + \alpha \, \| u \|_{L^2(\Omega)}^2 + \beta \, \| u \|_{L^1(\Omega)} \\ \text{such that}\quad & u \in U_\textup{ad} \subset L^2(\Omega). \end{aligned} \right.\tag{$\textbf{P}_{\alpha,\delta}$} \label{eq:P} \end{equation} Here, $\Omega \subset \mathbb{R}^n$ is a bounded domain, $H$ is some Hilbert space, $\mathcal{S} \in \mathcal{L}(L^2(\Omega), H)$ is compact (e.g.\ the solution operator of an elliptic partial differential equation), $\alpha > 0$, and $\delta, \beta \ge 0$. The problem~\eqref{eq:P} can be interpreted as an inverse problem as well as an optimal control problem. Let us denote the solution by $u_{\alpha,\delta}$. The estimate $\| u_{\alpha,0} - u_{\alpha,\delta}\|_{L^2(\Omega)} \le \delta \, \alpha^{-1/2}$ for the error due to the noise level $\delta$ is well known for $\beta = 0$, and the proof can be extended to the case $\beta > 0$. A typical way to estimate the regularization error as $\alpha \searrow 0$ is via a source condition, e.g.\ $u_{0,0} = \mathcal{S}^\star w$ with some $w \in H$. This yields $\| u_{\alpha,0} - u_{0,0} \|_{L^2(\Omega)} \le C \, \alpha^{1/2}$. But if pointwise constraints are present ($U_\textup{ad} = \{ u \in L^2(\Omega) : u_a \le u \le u_b \}$), $u_{0,0}$ often is bang-bang, i.e.\ $u_{0,0}(x) \in \{u_a, 0, u_b\}$ a.e.\ in $\Omega$. Hence $u_{0,0} \notin H^1(\Omega)$, and by $\operatorname{range}(\mathcal{S}^\star) \subset H^1(\Omega)$ a source condition with $\mathcal{S}^\star$ cannot hold. In this talk we present a new technique for deriving rates of the regularization error using a combination of a source condition and a regularity assumption on the adjoint variable $p_{0,0} = \mathcal{S}^\star(z_0 - \mathcal{S} u_{0,0})$. If the measure of the set $\big\{ \big| |p_{0,0}| - \beta \big| \le \varepsilon \big\}$ is bounded by $C \, \varepsilon$ for all $\varepsilon \ge 0$, it is possible to show $\| u_{\alpha,0} - u_{0,0} \|_{L^2(\Omega)} \le C \, \alpha^{1/2}$ without a source condition. We present examples showing that the error rates are sharp.

  18. "Generalized Tikhonov regularization with Bregman discrepany"
    Worliczek, Nadja
    This talk will deal with generalized Tikhonov regularization with Bregman distances as discrepancy term. We give a short motivation for using them by means of statistical inversion theory, where maximum a posteriori estimators of the solution of inverse problems are often obtained by minimizing such functionals. Furthermore, we will examine some basic properties of this sort of Tikhonov functional.

  19. "Truncated Soft Shrinkage Iteration with Discrepancy Based Stopping Rule"
    Zhariy, Mariya


