Sparsity Constrained Nonlinear Optimization: Optimality Conditions and Algorithms
A. Beck and Yonina C. Eldar
Introduction
Sparsity has long been exploited in signal processing, applied mathematics, statistics and computer science for tasks such as compression, denoising, model selection, image processing and more.
Here we consider recovering a sparse vector from nonlinear measurements. More generally, we treat the problem
min f(x)  s.t.  ‖x‖₀ ≤ k
where
 f(x) is an arbitrary continuously differentiable function (not necessarily convex!)
 ‖x‖₀ denotes the number of nonzero entries of x, and k is the desired sparsity level
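Although f may be nonconvex, the constraint itself is purely combinatorial: the feasible set consists of all vectors with at most k nonzero entries, and the Euclidean projection onto it simply keeps the k largest-magnitude entries. Below is a minimal MATLAB sketch of this projection (the helper name project_sparse is ours, not part of the distributed package):

function xp = project_sparse(x, k)
% PROJECT_SPARSE  Euclidean projection onto {x : ||x||_0 <= k}:
% keep the k entries of x with largest magnitude, zero out the rest.
[~, idx] = sort(abs(x), 'descend');
xp = zeros(size(x));
xp(idx(1:k)) = x(idx(1:k));
end

This hard-thresholding operator is the computational core of the IHT method described below.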
We present and analyze several different optimality criteria based on the notions of stationarity and coordinatewise optimality. These conditions are then used to derive three numerical algorithms aimed at finding points satisfying the resulting optimality criteria: the iterative hard-thresholding method and the greedy and partial sparse-simplex methods. The first algorithm is essentially a gradient projection method, while the remaining two extend the popular orthogonal matching pursuit algorithm to the nonlinear setting. Theoretical convergence of these techniques and their relation to the derived optimality conditions are studied.
A special case is the phase retrieval problem, in which f penalizes the deviation of the squared magnitudes |F_i x|² from the observed measurements, where F_i is the ith row of the Fourier matrix. For details on this problem, see the phase retrieval page.
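As an illustration only (the exact objective used in the package may differ), a common least-squares formulation of phase retrieval is f(x) = Σ_i (|F_i x|² − c_i)², where c_i are the measured intensities; in MATLAB:

% Illustrative phase retrieval objective (assumed formulation):
% f(x) = sum_i (|F_i x|^2 - c_i)^2, with F the DFT matrix and c
% the vector of measured intensities.
n = 32;
F = fft(eye(n));                        % n x n DFT matrix
x_true = zeros(n, 1);
x_true([3 7 20]) = randn(3, 1);         % a 3-sparse test signal
c = abs(F * x_true).^2;                 % noiseless intensity measurements
f = @(x) sum((abs(F * x).^2 - c).^2);   % objective handle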
1. The IHT Method
A well-known solution method is the so-called iterative hard-thresholding (IHT) method, whose recursive update formula is

x_{t+1} = P_k( x_t − (1/L) ∇f(x_t) ),

where L is a stepsize constant (an upper bound on the Lipschitz constant of ∇f) and P_k denotes the hard-thresholding projection that keeps the k largest-magnitude entries. The m-file implementing the IHT method is IHT.m. This method converges to an L-stationary point.
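The iteration is simple enough to sketch in a few lines of MATLAB; this is a minimal illustration, not the distributed IHT.m, whose interface may differ. It assumes a gradient handle grad_f, a stepsize constant L, and the project_sparse helper from above:

function x = iht_sketch(grad_f, L, k, x0, iters)
% IHT_SKETCH  Minimal iterative hard-thresholding loop: take a
% gradient step with stepsize 1/L, then project onto the set of
% k-sparse vectors.
x = x0;
for t = 1:iters
    x = project_sparse(x - (1/L) * grad_f(x), k);
end
end

For the least-squares objective f(x) = ‖Ax − b‖², one may take grad_f = @(x) 2*A'*(A*x - b) and L = 2*norm(A)^2.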
2. The Greedy Sparse-Simplex Method
The next method is an extension of orthogonal matching pursuit (OMP) to the nonlinear setting. It is shown to converge to a coordinatewise minimum, which is a stronger notion of optimality than L-stationarity. Thus, this approach tends to perform better than IHT and works under more relaxed conditions.
To invoke the greedy sparse-simplex method, two MATLAB functions must be constructed. The first is the objective function; the second performs one-dimensional optimization with respect to each coordinate and outputs the index of the variable causing the largest decrease, its optimal value, and the corresponding objective function value. For the least-squares problem, the objective is implemented in f_LI.m and the second required function in g_LI.m.
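For the least-squares objective f(x) = ‖Ax − b‖², these one-dimensional minimizations have a closed form: with r = Ax − b and a_i the ith column of A, the optimal update of coordinate i is x_i − (a_iᵀr)/‖a_i‖², yielding an objective decrease of (a_iᵀr)²/‖a_i‖². The following sketch computes all of these at once, in the spirit of g_LI.m (the actual file's interface may differ):

function [ibest, xbest, fbest] = coord_min_ls(A, b, x)
% COORD_MIN_LS  For f(x) = ||Ax - b||^2, minimize over each single
% coordinate (others held fixed) and return the index giving the
% largest decrease, the optimal value of that coordinate, and the
% resulting objective value.
r = A * x - b;
colnorms = sum(A.^2, 1)';            % ||a_i||^2 for each column of A
g = A' * r;                          % a_i' * r for each column of A
decrease = (g.^2) ./ colnorms;       % decrease achievable per coordinate
[~, ibest] = max(decrease);
xbest = x(ibest) - g(ibest) / colnorms(ibest);
fbest = norm(r)^2 - decrease(ibest);
end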
With these two functions in place, we can invoke the main function greedy_sparse_simplex.m.
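A hypothetical invocation for the least-squares instance might then look as follows; the argument list below is our assumption, and the actual signature is documented in guide.pdf:

% Hypothetical call -- the argument list is assumed, not the
% documented interface of greedy_sparse_simplex.m.
A = randn(50, 100); b = randn(50, 1); k = 5;
f = @(x) norm(A*x - b)^2;              % objective, as implemented in f_LI.m
g = @(x) coord_min_ls(A, b, x);        % coordinate oracle, as in g_LI.m
x = greedy_sparse_simplex(f, g, k);    % assumed interface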
3. The Partial Sparse-Simplex Method
The partial sparse-simplex method is implemented in the m-file partial_sparse_simplex.m.
The input arguments are the same as those of greedy_sparse_simplex except for one additional input argument: the gradient function. For the function f_{QU}, the gradient is implemented in the m-file gradient_QU.m.
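f_{QU} itself is not reproduced on this page; as an illustration of the extra argument, for the least-squares objective the gradient is 2Aᵀ(Ax − b), and a hypothetical call (again, the true signature is documented in guide.pdf) would be:

% Hypothetical call -- the argument list is assumed.
grad = @(x) 2*A'*(A*x - b);                  % gradient of ||Ax - b||^2
x = partial_sparse_simplex(f, g, grad, k);   % assumed interface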
Reference
A. Beck and Y. C. Eldar, "Sparsity Constrained Nonlinear Optimization: Optimality Conditions and Algorithms," SIAM Journal on Optimization, vol. 23, no. 3, pp. 1480–1509, 2013.
Software Download
Installation:
1. Unzip all files to a directory of your choice.
2. Open the guide.pdf file and follow the steps.
