Browsing by Subject "Sparse modeling"
Now showing 1 - 3 of 3
Item Selected topics of high-dimensional sparse modeling (2013-11) Yi, Feng

In this thesis we study three problems in high-dimensional sparse modeling. The first is high-dimensional covariance matrix estimation. Massive high-dimensional data sets are increasingly common in scientific investigations. Here we focus on one class of covariance matrices, bandable covariance matrices, in which the dependence structure of the variables follows a natural order: many off-diagonal elements are very small, especially those far from the diagonal, which makes the covariance matrix effectively sparse. The tapering covariance estimator has been shown to attain the optimal minimax rate of convergence for estimating large bandable covariance matrices, but its estimation risk depends critically on the choice of tapering parameter. We develop a Stein's Unbiased Risk Estimation (SURE) theory for estimating the Frobenius risk of the tapering estimator: SURE tuning selects the minimizer of the SURE curve as the tapering parameter, and the covariance matrix is then estimated by the tapering estimator with that parameter.

The second part of the thesis concerns high-dimensional varying-coefficient models. Varying-coefficient models are used when the effects of some variables depend on the values of other variables; one interesting and useful case is the model in which the coefficients of all variables change over time. A nonparametric method based on B-splines is used to estimate the marginal coefficient of each variable, and Varying-coefficient Independence Screening (VIS) is proposed to screen for important variables. An iterative VIS (IVIS) procedure is proposed to improve the performance of the algorithm.

In the third part of the thesis, we study a high-dimensional extension of traditional factor analysis that relaxes the independence assumption on the error term: the inverse covariance of the errors is assumed to be sparse but not necessarily diagonal. We propose a generalized EM algorithm to fit the extended factor analysis model. The new model not only makes factor analysis more flexible, but can also be used to discover the hidden conditional structure of the variables after the common factors are discovered and removed.
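As a rough illustration of the first item above, the sketch below constructs the standard tapering estimator for a bandable covariance matrix and chooses the tapering parameter by minimizing an estimated Frobenius risk. The SURE criterion developed in the thesis is not reproduced here; a simple random-sample-splitting proxy stands in for it, and the function names, candidate grid, and splitting heuristic are all illustrative assumptions.

```python
import numpy as np

def tapering_weights(p, k):
    """Tapering weights for a p x p covariance matrix with bandwidth k:
    1 within k/2 of the diagonal, linearly decaying to 0 at distance k."""
    kh = k / 2.0
    dist = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    return (np.clip(k - dist, 0, None) - np.clip(kh - dist, 0, None)) / kh

def tapering_estimator(X, k):
    """Tapered sample covariance of X (rows are observations)."""
    S = np.cov(X, rowvar=False)
    return tapering_weights(X.shape[1], k) * S

def select_tapering_parameter(X, ks, n_splits=20, seed=0):
    """Pick k by a random-split Frobenius-loss proxy (a stand-in for SURE tuning)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    loss = np.zeros(len(ks))
    for _ in range(n_splits):
        idx = rng.permutation(n)
        fit, hold = idx[: n // 2], idx[n // 2:]
        S_hold = np.cov(X[hold], rowvar=False)
        for j, k in enumerate(ks):
            loss[j] += np.linalg.norm(tapering_estimator(X[fit], k) - S_hold, "fro") ** 2
    return ks[int(np.argmin(loss))]

# Example on data whose true covariance decays away from the diagonal.
rng = np.random.default_rng(1)
p, n = 50, 200
Sigma = 0.6 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
k_hat = select_tapering_parameter(X, ks=[2, 4, 8, 16, 32])
Sigma_hat = tapering_estimator(X, k_hat)
```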
Item Sparse models for positive definite matrices (2015-02) Sivalingam, Ravishankar

Sparse models have proven to be extremely successful in image processing, computer vision and machine learning. However, a majority of the effort has been focused on vector-valued signals, and higher-order signals like matrices are usually vectorized as a pre-processing step and treated like vectors thereafter. Symmetric positive definite (SPD) matrices arise in probability and statistics and in the many domains built upon them. In computer vision, the region covariance descriptor, a feature used to characterize an object or image region, belongs to this class of matrices. Region covariances are immensely popular in object detection, tracking, and classification; human detection and recognition, texture classification, face recognition, and action recognition are some of the problems tackled with this powerful class of descriptors, and they have also caught on as useful features for speech processing and recognition. Given the popularity of sparse modeling in the vector domain, it is enticing to apply sparse representation techniques to SPD matrices as well. However, SPD matrices cannot be directly vectorized for sparse modeling, since their implicit structure is lost in the process and the resulting vectors do not adhere to the geometry of the positive definite manifold. To extend the benefits of sparse modeling to the space of positive definite matrices, we must therefore develop dedicated sparse algorithms that respect the positive definite structure and the geometry of the manifold.

The primary goal of this thesis is to develop sparse modeling techniques for symmetric positive definite matrices. First, we propose a novel sparse coding technique for representing SPD matrices as sparse linear combinations of a dictionary of atomic SPD matrices. Next, we present a dictionary learning approach in which these atoms are themselves learned from the given data in a task-driven manner. The sparse coding and dictionary learning approaches are then specialized to the case of rank-1 positive semi-definite matrices, and a discriminative dictionary learning approach from vector sparse modeling is extended to the setting of positive definite dictionaries. We present efficient algorithms and implementations for the proposed techniques, with practical applications in image processing and computer vision. (A rough sketch of the sparse coding step appears after the listing below.)

Item Sparse representations for three-dimensional range data restoration (University of Minnesota. Institute for Mathematics and Its Applications, 2011-02) Mahmoudi, Mona; Sapiro, Guillermo
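To make the SPD sparse coding idea from the Sivalingam thesis concrete, here is a minimal sketch that represents an SPD matrix as a nonnegative sparse combination of SPD dictionary atoms. It uses a plain Frobenius-norm fit with an l1 penalty solved by projected gradient descent; the thesis works with formulations adapted to the positive definite cone and also learns the dictionary atoms, so this is only an illustrative stand-in, and every name and parameter below is an assumption.

```python
import numpy as np

def sparse_code_spd(X, atoms, lam=0.1, n_iter=1000):
    """Nonnegative sparse coding of an SPD matrix X over a list of SPD atoms.

    Solves  min_{c >= 0}  0.5 * ||sum_i c_i A_i - X||_F^2 + lam * sum_i c_i
    by projected gradient descent.  Nonnegative weights on SPD atoms keep the
    reconstruction positive semidefinite (an illustrative stand-in for the
    cone-aware objective used in the thesis).
    """
    D = np.stack([A.ravel() for A in atoms], axis=1)   # p^2 x m design matrix
    x = X.ravel()
    c = np.zeros(D.shape[1])
    step = 1.0 / (np.linalg.norm(D, 2) ** 2)           # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ c - x) + lam
        c = np.maximum(c - step * grad, 0.0)           # project onto the nonnegative orthant
    return c, (D @ c).reshape(X.shape)

# Tiny usage example with random SPD atoms and a sparse ground-truth code.
rng = np.random.default_rng(0)
p, m = 5, 12
atoms = []
for _ in range(m):
    B = rng.standard_normal((p, p))
    atoms.append(B @ B.T + 1e-3 * np.eye(p))           # random SPD atom
true_c = np.zeros(m)
true_c[[1, 7]] = [0.8, 0.5]
X = sum(w * A for w, A in zip(true_c, atoms))
c_hat, X_hat = sparse_code_spd(X, atoms, lam=0.01)
```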