Sparse models assume minimal prior knowledge about the data, asserting only that the signal has many coefficients close to or equal to zero when represented in a given domain. From a data modeling point of view, sparsity can be seen as a form of regularization, that is, as a device to restrict or control the set of coefficient values the model is allowed to use to produce an estimate of the data. In this way, the flexibility of the model (that is, its ability to fit given data) is reduced, and robustness is gained by ruling out unrealistic estimates of the coefficients.
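To make this concrete, a minimal sketch of sparse regularization is the soft-thresholding operator, the proximal operator of the \(\ell_1\) penalty: applied to the coefficients of a signal in an orthonormal dictionary, it shrinks small coefficients exactly to zero, ruling them out of the estimate. The coefficient values and threshold below are illustrative, not taken from the thesis.

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of lam * ||x||_1.

    Shrinks each coefficient toward zero by lam and sets to exactly
    zero those whose magnitude falls below lam, yielding a sparse
    estimate of the original coefficients z.
    """
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Hypothetical noisy coefficients: two significant, three near zero.
z = np.array([3.0, -0.2, 0.05, -2.5, 0.1])
x = soft_threshold(z, lam=0.5)
# The small coefficients are zeroed; the large ones survive, shrunk by lam.
```

Running this, `x` keeps only the two large coefficients (shrunk to 2.5 and -2.0) and zeroes the rest, which is exactly the sense in which sparsity restricts the set of admissible coefficient values.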
Implicitly, standard sparse models give the same relevance to every subset of nonzero coefficients, out of a very large number of such subsets (one that grows exponentially with the number of atoms in the dictionary). This assumption is easily shown to be false in many practical cases: signals in general have a richer underlying structure that the model simply disregards. In many situations, standard sparse models still represent a very good trade-off between model simplicity and accuracy. However, many practical applications could greatly benefit from exploiting the structure present in the data, whether for interpretability, improved performance, or faster processing.
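One common way to encode such structure, sketched here as an illustration rather than as the specific model developed in this thesis, is group sparsity: coefficients are partitioned into groups that are activated or zeroed jointly, via the proximal operator of the \(\ell_{2,1}\) (group-lasso) penalty. The grouping and values below are hypothetical.

```python
import numpy as np

def group_soft_threshold(z, groups, lam):
    """Proximal operator of lam * sum_g ||x_g||_2.

    Each group of coefficients is kept or discarded as a unit: if the
    group's Euclidean norm exceeds lam it is shrunk, otherwise the
    whole group is set to zero. This favors structured sparsity
    patterns over arbitrary subsets of nonzero coefficients.
    """
    x = np.zeros_like(z)
    for g in groups:
        norm = np.linalg.norm(z[g])
        if norm > lam:
            x[g] = (1.0 - lam / norm) * z[g]
    return x

# Hypothetical coefficients: the first group is strong, the second weak.
z = np.array([1.0, 1.0, 0.1, 0.1])
groups = [[0, 1], [2, 3]]
x = group_soft_threshold(z, groups, lam=0.5)
# The weak group is zeroed entirely; the strong group survives, shrunk.
```

In contrast to plain \(\ell_1\) shrinkage, which treats each coefficient independently, the decision here is made per group, so the recovered sparsity pattern respects the assumed structure of the signal.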
The main goal of this thesis is to explore different ways of incorporating data structure into sparse models and to evaluate them in real image and signal processing applications. The main directions of research are: (i) extending sparse models by imposing structure on the sparsity patterns of nonzero coefficients, in order to stabilize the estimation and account for valuable prior knowledge of the signals; (ii) analyzing how this impacts challenging real applications where the problem of estimating the model coefficients is severely ill-posed. As a fundamental example, the problem of monaural source separation is extensively evaluated throughout the thesis; (iii) studying ways of exploiting the underlying structure of the data in order to speed up the coding process. One of the most important challenges in sparse modeling is the relatively high computational complexity of the inference algorithms, which is of critical importance when dealing with very large scale (modern-size) applications as well as real-time processing.