This dissertation studies constrained matrix and tensor factorization problems that arise when estimating factor analysis models used in signal processing and machine learning. Factor analysis dates back to the celebrated principal component analysis (PCA), which provides the optimal low-rank approximation to matrix data. PCA is a key dimensionality reduction technique; however, it cannot be used to estimate the underlying generative matrix factors, which are generally non-orthogonal. On the other hand, it has been observed that imposing simple constraints on the latent factors, for example non-negativity (which in a way opposes orthogonality), can make these factors essentially unique -- although theoretical understanding of this phenomenon was very limited prior to this dissertation. Moving from two-way to higher-way tensor data, the so-called canonical polyadic decomposition (CPD) model can provide essentially unique factors under mild conditions, but it is also widely accepted that imposing correct priors on the latent factors can greatly enhance estimation performance. One of the main contributions of this dissertation is the development of a simple sufficiently scattered condition that turns out to underpin a great variety of factor analysis models -- including but not limited to non-negative matrix factorization (NMF) and other models relying on volume minimization, which are nowadays pervasive in signal processing, machine learning, and geoscience and remote sensing. In multispectral imaging, this condition helps resolve a popular conjecture known as Craig's belief: that sources can be blindly identified via ``minimum volume transformation'' under certain conditions. In these and other applications, the sufficiently scattered condition provides the best identifiability results known to date, and it has significant potential for further applications.
Besides theoretical results on the essential uniqueness of factor analysis models under non-negativity constraints, a general algorithmic framework is proposed that seamlessly incorporates many common types of constraints imposed on the latent factors. The new framework is a hybrid between alternating optimization (AO) and the alternating direction method of multipliers (ADMM): each matrix factor is updated in turn using ADMM, hence the name AO-ADMM. This combination can naturally accommodate a great variety of constraints on the factor matrices, and almost all commonly used loss measures for the fitting. Computation caching and warm-start strategies ensure that each update is evaluated efficiently, while the outer AO framework exploits recent developments in block coordinate descent (BCD)-type methods, which help ensure that every limit point is a stationary point, as well as faster and more robust convergence in practice. Extensive simulations and experiments with real data showcase the effectiveness and broad applicability of the proposed framework. In addition to establishing essential uniqueness and providing effective computational algorithms for various matrix and tensor factorization models, we further study how well these models estimate the correct latent factors in noise. Towards this end, pertinent Cramér-Rao bounds (CRB) for constrained matrix and tensor factorization are derived. This turns out to be a nontrivial task, mainly due to the presence of constraints and trivial ambiguities, as well as computational challenges. In particular, the Fisher information matrix (FIM) is always singular for the models considered (even without constraints) -- but modern CRB analysis shows that taking the pseudo-inverse still provides a valid (albeit potentially loose) bound. For big data analytics, however, the challenge is how to efficiently compute this pseudo-inverse.
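To make the AO-ADMM idea concrete, the following is a minimal sketch for the NMF special case (least-squares loss, non-negativity constraints). This is an illustrative implementation under our own naming, not the dissertation's code: each factor subproblem is solved by a few ADMM iterations, the Gram matrix and its Cholesky factor are cached across inner iterations, and the dual and auxiliary variables are warm-started across outer iterations.

```python
import numpy as np

def admm_nls(X, W, Z, U, inner_iters=10):
    """One ADMM solve of min_H ||X - W H||_F^2  s.t. H >= 0,
    via the splitting H = Z, with warm-started Z and dual U."""
    G = W.T @ W                       # cached Gram matrix
    F = W.T @ X                       # cached cross-product
    rho = np.trace(G) / G.shape[0]    # heuristic step size (an assumption)
    L = np.linalg.cholesky(G + rho * np.eye(G.shape[0]))  # cached factorization
    for _ in range(inner_iters):
        # H-update: solve (G + rho I) H = F + rho (Z - U) via the Cholesky factor
        H = np.linalg.solve(L.T, np.linalg.solve(L, F + rho * (Z - U)))
        Z = np.maximum(0.0, H + U)    # Z-update: projection onto H >= 0
        U = U + H - Z                 # dual ascent
    return Z, U

def ao_admm_nmf(X, r, outer_iters=100, seed=0):
    """AO-ADMM sketch for NMF: alternately solve for H and W with ADMM."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    Uh = np.zeros((r, n))             # warm-started dual for the H-subproblem
    Uw = np.zeros((r, m))             # warm-started dual for the W-subproblem
    for _ in range(outer_iters):
        H, Uh = admm_nls(X, W, H, Uh)
        Wt, Uw = admm_nls(X.T, H.T, W.T, Uw)
        W = Wt.T
    return W, H
```

Other constraints (sparsity, simplex, bounds) would only change the `Z`-update to the corresponding proximal operator, which is what makes the framework broadly applicable.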
Towards this end, we identify the null space of the FIM for several special cases, leading to very efficient algorithms for computing the CRB. Equipped with these results, we benchmark the performance of various algorithms against the CRB. Interestingly, our algorithms can approach this (optimistic) CRB under certain conditions.
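As a small numerical illustration of why the pseudo-inverse is needed, consider the unconstrained matrix factorization model X = W H + N with white Gaussian noise. The sketch below (our own illustrative code, not the dissertation's) builds the FIM from the Jacobian of vec(W H); the r-by-r invertible-transformation ambiguity (W, H) -> (W M, M^{-1} H) makes the FIM rank-deficient by r^2, so the CRB is obtained via the pseudo-inverse.

```python
import numpy as np

def fim_matrix_factorization(W, H, sigma2=1.0):
    """FIM for theta = [vec(W); vec(H)] under X = W H + Gaussian noise.
    Uses vec(W H) = (H^T kron I_m) vec(W) = (I_n kron W) vec(H)."""
    m, r = W.shape
    n = H.shape[1]
    Jw = np.kron(H.T, np.eye(m))      # d vec(WH) / d vec(W)
    Jh = np.kron(np.eye(n), W)        # d vec(WH) / d vec(H)
    J = np.hstack([Jw, Jh])
    return J.T @ J / sigma2

def crb_pinv(W, H, sigma2=1.0):
    """CRB via the pseudo-inverse of the (always singular) FIM."""
    return np.linalg.pinv(fim_matrix_factorization(W, H, sigma2))
```

Forming and pseudo-inverting this dense matrix costs at least O(((m+n)r)^3), which is exactly why closed-form characterizations of the FIM null space matter at scale.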
University of Minnesota Ph.D. dissertation. August 2016. Major: Electrical Engineering. Advisor: Nicholas Sidiropoulos. 1 computer file (PDF); x, 142 pages.
Constrained Matrix and Tensor Factorization: Theory, Algorithms, and Applications.
Retrieved from the University of Minnesota Digital Conservancy.