Much of modern data processing requires identification of low-dimensional structures in high-dimensional spaces, using observations that are incomplete or noisy. This general paradigm applies to the restoration of images (where natural images form a low-dimensional subset of the space of all possible images), compressed sensing (where the signal can be represented in terms of just a few elements of an appropriate basis), regularized regression (where we seek to explain observations in terms of just a few predictive variables), matrix completion (where we seek a low-rank matrix that fits partial information about the matrix), and so on.
Sparse optimization provides valuable tools for formulating and solving problems of this type. A key concept is regularization, whereby we introduce functions into the optimization formulation that induce the required type of structure in the solutions. In the simplest case, the 1-norm of a vector x is used to obtain solutions in which x is sparse, that is, it contains relatively few nonzero components. Often (though not always) the regularized formulations are convex but nonsmooth. Customized optimization algorithms are required to handle the large data size and dimension. This talk will survey the scope of applications of sparse optimization in data processing, and then describe the formulation techniques and algorithms that are being used to solve these problems.
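As a minimal illustration (not part of the abstract itself), the sparsity-inducing effect of 1-norm regularization can be seen in its proximal operator, the soft-thresholding map, which appears as a building block in many first-order algorithms for these problems. The function and data below are a hypothetical sketch:

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of lam * ||x||_1: shrinks each entry toward zero,
    setting small entries exactly to zero (hence sparse solutions)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# A dense vector becomes sparse after one proximal step.
z = np.array([3.0, -0.5, 0.2, -2.0, 0.05])
x = soft_threshold(z, 1.0)
print(x)                     # entries with |z_i| <= 1 are zeroed
print(np.count_nonzero(x))   # only the two large entries survive
```

Iterating steps of this kind on a smooth data-fit term plus the 1-norm yields the proximal-gradient (ISTA-type) methods commonly used for such formulations.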