Over the past decade, sparsity has become one of the most prevalent themes in signal processing and big-data applications. In general, sparsity describes the phenomenon where high-dimensional data can be explained by only a few variables, values, or coefficients. The presence of sparsity often enables efficient algorithms for extracting relevant information from the data. This effort focuses on the theoretical treatment of specialized sensing and inference techniques that exploit sparsity and other forms of structured low-dimensional representations.

The first part of this work focuses on noisy matrix estimation and completion problems. We consider the problem of estimating matrices that adhere to a "sparse-factor model" decomposition -- matrices that may be accurately described by the product of two matrices, one of which is sparse -- from noisy observations, where the noise is modeled as random and may arise from any of several likelihood models (e.g., Gaussian, Poisson, Laplace, and even one-bit models). Sparse-factor models can describe collections of vectors that reside in a union of linear subspaces, and can be viewed as a powerful generalization of the widely used principal component analysis technique, which assumes the data reside on or near a single subspace. We establish estimation error guarantees for sparse-factor matrix estimation problems (where a noisy observation of each matrix entry is available) and matrix completion problems (where only a subset of entries is observed, each corrupted by noise), and describe an efficient algorithm for performing inference in problems of this form.

In the second part of this work, we examine and quantify the benefits of "adaptive sensing" techniques, which employ data-dependent feedback in the data acquisition process, in the context of a structured sparse inference task.
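To make the sparse-factor observation model from the first part concrete, the following is a minimal sketch (not the dissertation's estimator) of generating data under that model: the dimensions, noise level, and sampling rate are illustrative assumptions, and a Gaussian likelihood stands in for the other noise models mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: ambient dimension n, latent dimension r,
# and m observed column vectors (all chosen arbitrarily here).
n, r, m = 50, 5, 200

# Sparse-factor model M = D @ A: D is a dense factor and A is a
# sparse factor, so each column of M lies in the span of only a few
# columns of D -- i.e., the columns live in a union of subspaces.
D = rng.standard_normal((n, r))
A = rng.standard_normal((r, m))
A[rng.random((r, m)) > 0.4] = 0.0   # zero out most entries of the sparse factor

M = D @ A

# Noisy matrix estimation: a noisy observation of every entry.
# (A Poisson, Laplace, or one-bit likelihood could replace this.)
Y = M + 0.1 * rng.standard_normal((n, m))

# Matrix completion: only a random subset of the noisy entries is
# observed, recorded by a binary sampling mask.
mask = rng.random((n, m)) < 0.3
Y_obs = np.where(mask, Y, np.nan)
```

The inference problem treated in the dissertation is the reverse direction: recovering M (or its factors) from Y or Y_obs.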
This work is motivated by a desire to formally exploit the structural characteristics and dependencies present in the wavelet representations of many natural images. We devise an efficient and provably optimal (in a minimax sense) adaptive acquisition method for estimating the locations of the significant wavelet coefficients from noisy observations. Our results demonstrate the significant improvements that can be obtained when leveraging the inherent structural dependencies in the sparse representation of the signal to be acquired and incorporating feedback in the measurement process, relative to the best possible methods that utilize either structural information or adaptivity alone. Overall, our results provide essential new insights into the virtues of adaptive data acquisition in sparse inference tasks.
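The adaptivity ingredient described above can be illustrated with a simple two-stage support-estimation sketch. This is a generic hedged example, not the dissertation's tree-structured wavelet procedure (which additionally exploits parent-child dependencies among coefficients); the budget split, thresholds, and signal amplitude are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sparse signal: k significant coefficients among n.
n, k = 1024, 16
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = 3.0

# Stage 1: spend half the measurement budget uniformly across all n
# locations, then keep only the locations that look significant.
y1 = x + rng.standard_normal(n) / np.sqrt(0.5)
candidates = np.flatnonzero(y1 > 1.0)

# Stage 2 (the data-dependent feedback step): concentrate the
# remaining budget on the surviving candidates, giving each a much
# higher effective SNR than any non-adaptive allocation would.
per_loc = 0.5 * n / max(len(candidates), 1)
y2 = x[candidates] + rng.standard_normal(len(candidates)) / np.sqrt(per_loc)
estimate = candidates[y2 > 1.5]
```

The point of the sketch is the feedback loop: the second stage's measurement allocation depends on the first stage's data, which is precisely what non-adaptive schemes cannot do.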
University of Minnesota Ph.D. dissertation. May 2015. Major: Electrical Engineering. Advisor: Jarvis Haupt. 1 computer file (PDF); x, 136 pages.
Structured and Sparse Signal Estimation - Fundamental Limits and Error Bounds.
Retrieved from the University of Minnesota Digital Conservancy,
Content distributed via the University of Minnesota's Digital Conservancy may be subject to additional license and use restrictions applied by the depositor.