Browsing by Subject "Compressed sensing"
Now showing 1 - 4 of 4
Item: Bounds of restricted isometry constants in extreme asymptotics: formulae for Gaussian matrices (University of Minnesota. Institute for Mathematics and Its Applications, 2011-12). Bah, Bubacarr; Tanner, Jared.

Item: Learning in Human and Robot Search: Subgoal, Submodularity, and Sparsity (2016-08). Tseng, Kuo-Shih.

Search is an essential technology for various robotic applications, and it is also central to human daily activities. Searching for targets efficiently involves NP-hard problems, yet young children can search for targets very efficiently. How humans search so efficiently remains a mystery. Hence, two central questions arise: First, how do humans search for targets efficiently? Second, how can robots learn to search like humans? The central goal of this dissertation is to explore these two questions by analyzing and modeling human search behavior and then applying the findings to robot search. The thesis is organized in three parts: human search analysis, problem reformulation, and learning in robot search.

In the first part, the goal is to analyze and model human search behavior. In the experiments, human subjects remotely control a mobile robot to search for a target through teleoperation, and their search data are collected for further analysis. A novel framework consisting of structure learning and K-means clustering is proposed to model and analyze human search behavior. The analysis of the experimental data demonstrates that (1) humans solve the complex search task by breaking it up into smaller tasks (the so-called subgoal hypothesis), and (2) humans consider both coverage and motion cost while searching.

In the second part, the goal is to derive an objective function that accounts for coverage and motion cost. The objective of search is to maximize the probability of detecting the target while covering most of the environment in minimum time. The robot must therefore consider three objectives: (1) maximal coverage, (2) maximal probability of detection, and (3) minimum-time trajectory planning. All three objectives are NP-hard. Since two of these objective functions are submodular, the search problem can be reformulated as maximizing the cumulative probability of detection (PD) subject to motion cost, and the proof shows that greedy algorithms give near-optimal subgoals with high probability.

In the last part, the approach is to learn to search through sequential perception and actions. Since the PD function and the cost-to-go (CTG) function depend on the environment, the robot needs to learn both. To make learning the PD function tractable, a technique based on the Spatial Fourier Sparse Set (SFSS) is proposed that exploits the structure in the relationship between the robot's spatial configuration and its environmental sensing, while Q-learning is proposed to learn the CTG function. The learned PD and CTG functions can then be used to generate subgoals for search tasks. Based on compressed sensing theory, the proposed SFSS can learn a submodular function from m samples, where m <= O(k log b), b is the number of Fourier basis functions, and k is the number of nonzero Fourier coefficients. The proposed algorithms generate near-optimal subgoals in both the human and robot search experiments: human experiments showed that search performance improves with the assistance of subgoals, and robot experiments demonstrated that the robot can find the target faster than existing approaches. To the best of our knowledge, this is the first work in which theorems and experiments show that humans cannot outperform robots on these search tasks, since the objective functions of the search problems are submodular.
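The reformulation above casts subgoal selection as maximizing a monotone submodular PD function while accounting for motion cost, the setting in which greedy selection is the classical near-optimal strategy (for a cardinality constraint, plain greedy is within a (1 - 1/e) factor of optimal, per Nemhauser, Wolsey, and Fisher). The sketch below illustrates a cost-benefit variant of that greedy rule; the candidate set and the `pd_gain` and `motion_cost` callables are hypothetical stand-ins, not the thesis's implementation.

```python
from typing import Callable, Hashable, Iterable, List, Set

def greedy_subgoals(
    candidates: Iterable[Hashable],
    pd_gain: Callable[[Set, Hashable], float],
    motion_cost: Callable[[Set, Hashable], float],
    k: int,
) -> List:
    """Pick k subgoals by the cost-benefit greedy rule: at each step,
    take the candidate whose marginal probability-of-detection gain
    per unit motion cost is largest."""
    chosen: Set = set()
    order: List = []
    remaining = set(candidates)
    for _ in range(k):
        if not remaining:
            break
        best = max(remaining,
                   key=lambda c: pd_gain(chosen, c) / max(motion_cost(chosen, c), 1e-9))
        chosen.add(best)
        order.append(best)
        remaining.discard(best)
    return order

# Toy usage: grid cells as candidates; unit gain for unseen cells,
# cost grows with distance from the origin.
cells = [(x, y) for x in range(3) for y in range(3)]
gain = lambda seen, c: 0.0 if c in seen else 1.0
cost = lambda seen, c: 1.0 + c[0] + c[1]
print(greedy_subgoals(cells, gain, cost, k=3))
```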
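The sample bound m <= O(k log b) is the standard compressed-sensing regime: a function with k active coefficients in a b-element basis can be recovered from roughly k log b random evaluations by sparse regression. Below is a minimal sketch of that recovery step only, using a generic cosine basis and scikit-learn's orthogonal matching pursuit as a stand-in for the SFSS machinery; the basis choice and all names are illustrative, not the thesis's construction.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
b, k, m = 256, 5, 60              # basis size, sparsity, sample count ~ k log b

# Hypothetical ground truth: a spatial function with k active cosine modes.
support = rng.choice(b, size=k, replace=False)
coef_true = np.zeros(b)
coef_true[support] = rng.normal(size=k)

def cosine_basis(x):
    # Row i holds cos(pi * j * x[i]) for j = 0..b-1.
    return np.cos(np.pi * np.outer(x, np.arange(b)))

# Evaluate the unknown function at m random spatial configurations.
x_samples = rng.uniform(0.0, 1.0, size=m)
A = cosine_basis(x_samples)
y = A @ coef_true + 0.01 * rng.normal(size=m)

# Sparse recovery of the Fourier coefficients from the few noisy samples.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
coef_hat = omp.fit(A, y).coef_
print("support recovered:", set(np.flatnonzero(coef_hat)) == set(support))
```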
Item: Learning to sense sparse signals: simultaneous sensing matrix and sparsifying dictionary optimization (University of Minnesota. Institute for Mathematics and Its Applications, 2008-05). Duarte-Carvajalino, Julio M.; Sapiro, Guillermo.

Item: Sparsity control for robustness and social data analysis (2012-05). Mateos Buckstein, Gonzalo.

The information explosion propelled by the advent of personal computers, the Internet, and global-scale communications has rendered statistical learning from data increasingly important for analysis and processing. The ability to mine valuable information from unprecedented volumes of data will facilitate preventing or limiting the spread of epidemics and diseases, identifying trends in global financial markets, protecting critical infrastructure including the smart grid, and understanding the social and behavioral dynamics of emergent social-computational systems. Along with data that adhere to postulated models, large volumes of data also contain those that do not: the so-termed outliers. This thesis contributes to several issues that pertain to resilience against outliers, a fundamental aspect of statistical inference tasks such as estimation, model selection, prediction, classification, tracking, and dimensionality reduction, to name a few.

The recent upsurge of research toward compressive sampling and parsimonious signal representations hinges on signals being sparse, either naturally or after projection onto a proper basis. The present thesis introduces a neat link between sparsity and robustness against outliers, even when the signals involved are not sparse. It is argued that controlling the sparsity of model residuals leads to statistical learning algorithms that are computationally affordable and universally robust to outlier models. Even though the focus is placed first on robustifying linear regression, the universality of the developed framework is highlighted through diverse generalizations that pertain to: i) the information used for selecting the sparsity-controlling parameters; ii) the nominal data model; and iii) the criterion adopted to fit the chosen model. Explored application domains include preference measurement for consumer utility function estimation in marketing, and load curve cleansing, a critical task in power systems engineering and management.

Finally, robust principal component analysis (PCA) algorithms are developed to extract the most informative low-dimensional structure from (grossly corrupted) high-dimensional data. Beyond its ties to robust statistics, the developed outlier-aware PCA framework is versatile enough to accommodate novel and scalable algorithms that: i) track the low-rank signal subspace as new data are acquired in real time; and ii) determine principal components robustly in (possibly) infinite-dimensional feature spaces. Synthetic and real data tests corroborate the effectiveness of the proposed robust PCA schemes when used to identify aberrant responses in personality assessment surveys, unveil communities in social networks, and detect intruders in video surveillance data.
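The link the abstract draws between residual sparsity and robustness can be made concrete in the linear-regression case: model each observation's potential gross error as an entry of a sparse outlier vector o and penalize its l1 norm, so that tuning the penalty controls how many residuals are flagged as outliers. Here is a minimal sketch under that model; the function names and the block-coordinate solver are illustrative, not the thesis's algorithms.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparsity_controlled_regression(X, y, lam, n_iters=100):
    """Fit y ~= X @ beta + o, with o a sparse vector of gross outliers:
        min over (beta, o) of  0.5*||y - X beta - o||^2 + lam*||o||_1
    solved by block coordinate descent: exact least squares in beta,
    entrywise soft-thresholding in o. Larger lam flags fewer outliers."""
    o = np.zeros_like(y)
    beta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        beta, *_ = np.linalg.lstsq(X, y - o, rcond=None)
        o = soft_threshold(y - X @ beta, lam)
    return beta, o

# Toy usage: 200 points, 3 features, 10 grossly corrupted responses.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
y[:10] += 8.0
beta_hat, o_hat = sparsity_controlled_regression(X, y, lam=1.0)
print(beta_hat, np.flatnonzero(o_hat))
```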
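The outlier-aware PCA idea can likewise be illustrated as a low-rank-plus-sparse decomposition: the low-rank part carries the principal subspace and the sparse part absorbs gross corruptions. The alternating heuristic below is a simple sketch of that decomposition, not the online subspace-tracking or kernelized algorithms the thesis develops; all names are hypothetical.

```python
import numpy as np

def robust_pca(M, rank, lam, n_iters=50):
    """Split M ~= L + S with L low-rank (principal subspace) and S sparse
    (gross outliers), alternating a truncated-SVD fit for L with
    entrywise soft-thresholding for S."""
    S = np.zeros_like(M)
    L = np.zeros_like(M)
    for _ in range(n_iters):
        U, sv, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * sv[:rank]) @ Vt[:rank]   # best rank-r fit to M - S
        S = np.sign(M - L) * np.maximum(np.abs(M - L) - lam, 0.0)
    return L, S

# Toy usage: rank-2 data with roughly 1% gross corruptions.
rng = np.random.default_rng(2)
M = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 50))
mask = rng.random(M.shape) < 0.01
M[mask] += 10.0
L_hat, S_hat = robust_pca(M, rank=2, lam=1.0)
```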