Author: Maunu, Tyler
Issued: 2018-07 (deposited 2018-11-28)
URI: https://hdl.handle.net/11299/201143
Description: University of Minnesota Ph.D. dissertation. July 2018. Major: Mathematics. Advisor: Gilad Lerman. 1 computer file (PDF); ix, 185 pages.
Abstract: This thesis consists of three works from my Ph.D. research. The first part is an overview of the problem of Robust Subspace Recovery (RSR). The next two parts present two algorithms for solving this problem, along with supporting mathematical analysis. Robust subspace recovery involves finding an underlying low-dimensional subspace in a dataset that is possibly corrupted with outliers. While this problem is easy to state, it has been difficult to develop optimal algorithms due to its underlying nonconvexity. The first chapter of this thesis gives a comprehensive review of the algorithms developed for RSR. After this, we discuss our proposed solutions to this problem. The first proposed algorithm, which we refer to as Fast Median Subspace (FMS), is designed to robustly determine the underlying subspace of such datasets while having lower computational complexity than existing accurate methods. We prove convergence of the FMS iterates to a stationary point. Further, under two special models of data, FMS converges to a point near the global minimum with overwhelming probability. Under these models, we show that the iteration complexity is globally sublinear and locally r-linear. For one of the models, these results hold for any fixed fraction of outliers (less than 1). Numerical experiments on synthetic and real data demonstrate its competitive speed and accuracy. Our second proposed algorithm involves geodesic gradient descent on the Grassmannian manifold. In the accompanying mathematical analysis, we prove that the underlying subspace is the only stationary point and local minimizer in a specified neighborhood whenever a deterministic condition holds for the dataset. We further show that if this deterministic condition is satisfied, geodesic gradient descent over the Grassmannian manifold can exactly recover the underlying subspace with proper initialization. Proper initialization by principal component analysis is guaranteed under a similar stability condition. Under slightly stronger assumptions, the gradient descent method with an adaptive step-size scheme achieves linear convergence. The practicality of the deterministic condition is demonstrated on several statistical models of data, and the method achieves nearly state-of-the-art recovery guarantees on the Haystack Model. We show that our gradient method can exactly recover the underlying subspace for any fixed fraction of outliers (less than 1), provided the sample size is large enough.
Language: en
Keywords: big data; dimension reduction; nonconvex optimization; robust statistics
Title: A Framework for Nonconvex Robust Subspace Recovery
Type: Thesis or Dissertation
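
To make the FMS description concrete, below is a minimal NumPy sketch of an FMS-style iteratively reweighted PCA loop for the least-absolute-deviations subspace energy sum_i dist(x_i, L). The function name, the regularization parameter delta, and the fixed iteration count are illustrative assumptions, not the reference implementation from the thesis.

```python
import numpy as np

def fms_sketch(X, d, n_iter=100, delta=1e-10):
    """Illustrative FMS-style iteration (assumed interface, not the thesis code).

    X : (N, D) data matrix, one sample per row.
    d : target subspace dimension.
    Returns a (D, d) orthonormal basis of the estimated subspace.
    """
    # Initialize with ordinary PCA (top-d right singular vectors).
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[:d].T                                  # (D, d) orthonormal basis
    for _ in range(n_iter):
        # Distance of each point to the current subspace span(V).
        residual = X - X @ V @ V.T
        dist = np.linalg.norm(residual, axis=1)
        # Reweight points inversely to their (regularized) distance,
        # so points far from the subspace are down-weighted.
        w = 1.0 / np.maximum(dist, delta)
        # The next iterate is the top-d eigenspace of the weighted covariance.
        C = (X * w[:, None]).T @ X
        _, eigvecs = np.linalg.eigh(C)            # ascending eigenvalues
        V = eigvecs[:, -d:]
    return V
```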
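
The second algorithm is only named in the abstract, so the following is just a sketch of one plausible form of geodesic gradient descent on the Grassmannian for the same energy, assuming PCA initialization and a fixed step size in place of the adaptive scheme analyzed in the thesis; all names, parameters, and the specific update are illustrative assumptions.

```python
import numpy as np

def ggd_sketch(X, d, n_iter=200, step=0.01, delta=1e-10):
    """Illustrative geodesic gradient descent on the Grassmannian (assumed form)."""
    # PCA initialization, as in the initialization guarantee described above.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[:d].T                                  # (D, d) orthonormal basis
    for _ in range(n_iter):
        R = X - X @ V @ V.T                       # residuals, rows orthogonal to V
        dist = np.maximum(np.linalg.norm(R, axis=1), delta)
        # Riemannian gradient of F(V) = sum_i dist(x_i, span(V));
        # it already lies in the tangent space since R is orthogonal to V.
        G = -(R / dist[:, None]).T @ (X @ V)      # (D, d)
        # Step along the geodesic in the descent direction -G
        # (Grassmannian geodesic via the compact SVD of the tangent direction).
        U, S, Wt = np.linalg.svd(-G, full_matrices=False)
        V = (V @ Wt.T) @ np.diag(np.cos(step * S)) @ Wt \
            + U @ np.diag(np.sin(step * S)) @ Wt
    return V
```

A practical implementation would replace the fixed step with a line search or the adaptive step-size scheme referenced in the abstract, and may re-orthonormalize V periodically to control numerical drift.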