Towards a Framework for Simultaneous Feature Tracking and Segmentation (2016-05)
Poling, Bryan

This is a collection of several works that I completed during my PhD research as a graduate student at the University of Minnesota. There are three parts, each focusing on a different topic in machine learning and computer vision. The common theme underlying these works is the tracking of feature points in motion video and their segmentation by object. Abstracts for each part are included below.

Abstract for Part I: We present a novel approach to rigid-body motion segmentation from two views. We use a previously developed nonlinear embedding of two-view point correspondences into a 9-dimensional space and identify the different motions by segmenting lower-dimensional subspaces. To overcome mixed and unknown subspace dimensions and nonuniform distributions along the subspaces, we introduce the novel concept of global dimension and its minimization for clustering subspaces, with some theoretical motivation. We propose a fast projected-gradient algorithm for minimizing global dimension and thus segmenting motions from two views. We develop an outlier-detection framework around the proposed method, and we present state-of-the-art results on outlier-free and outlier-corrupted two-view data for motion segmentation.

Abstract for Part II: In this part, we present a framework for jointly tracking a collection of features in motion video, which enables sharing information between the different features in the scene. Our method exploits the fact that trajectories of features from locally rigid and semi-rigid scenes are approximately confined to low-dimensional subspaces, and it uses this fact to aid the tracking of poor-quality and non-corner-like features by filling in missing information from other, better feature points.
Our method significantly improves tracking performance in real-world, poorly lit scenes, does not require explicit modeling of the structure or motion of the scene, and runs in real time on a single CPU core.

Abstract for Part III: In this part, we employ low-cost gyroscopes to improve general-purpose feature tracking. We use some of the same ideas from Part II, except that instead of borrowing information from other features to aid the tracking of poor-quality features, we rely on independent estimates of optical flow from external inertial sensors. Most related previous methods use gyroscopes to initialize and bound the search for features. In contrast, we use them to regularize the tracking energy function so that they can directly assist in the tracking of ambiguous and poor-quality features. We demonstrate that our simple technique offers significant improvements in performance over conventional template-based tracking methods and is, in fact, competitive with more complex and computationally expensive state-of-the-art trackers, at a fraction of the computational cost. Additionally, we show that the practice of initializing template-based feature trackers like KLT (Kanade-Lucas-Tomasi) with gyro-predicted optical flow offers no advantage over a careful optical-only initialization, suggesting that a deeper level of integration, like the method we propose, is needed to realize a genuine improvement in tracking performance from these inertial sensors.
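The low-dimensional-subspace fact that Part II relies on can be checked numerically on a toy example. The sketch below (NumPy, not the thesis code; all names are illustrative) generates trajectories of feature points under a 2-D rigid motion and shows that the stacked 2F-by-N trajectory matrix has numerical rank 3, since each frame's positions are an affine image of the initial configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
F, N = 20, 12                       # number of frames, number of feature points
X = rng.standard_normal((2, N))     # initial 2-D feature positions

rows = []
for f in range(F):
    theta = 0.05 * f                # per-frame rotation angle
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = np.array([[0.3 * f], [0.1 * f]])   # per-frame translation
    rows.append(R @ X + t)          # feature positions in frame f
W = np.vstack(rows)                 # 2F x N trajectory matrix

# W = M @ S with M (2F x 3) stacking [R_f | t_f] and S = [X; 1] (3 x N),
# so its rank is at most 3 regardless of F and N.
s = np.linalg.svd(W, compute_uv=False)
numerical_rank = int(np.sum(s > 1e-8 * s[0]))
print(numerical_rank)               # -> 3
```

The same rank deficiency is what lets a joint tracker fill in information for a weak feature from the trajectories of stronger ones.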
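The effect of regularizing the tracking energy with a gyro-predicted flow, as described in Part III, can be illustrated on a 1-D toy problem. This is a hedged sketch, not the thesis implementation: the function name and parameters are hypothetical, and the energy is a simple sum-of-squared-differences term plus a quadratic penalty pulling the displacement toward the inertial prediction:

```python
import numpy as np

def track_regularized(template, image, pos, d_gyro, lam, search=5):
    """Brute-force minimizer of the regularized tracking energy
    E(d) = sum_x (I(pos + d + x) - T(x))^2 + lam * (d - d_gyro)^2
    over integer displacements d in [-search, search]."""
    n = len(template)
    energies = {}
    for d in range(-search, search + 1):
        patch = image[pos + d : pos + d + n]
        energies[d] = np.sum((patch - template) ** 2) + lam * (d - d_gyro) ** 2
    return min(energies, key=energies.get)

image = np.tile([0.0, 1.0], 15)     # periodic texture: ambiguous for SSD alone
template = image[12:19].copy()      # feature appearance; true displacement is 2
# Without the prior (lam = 0), every even shift matches perfectly (SSD = 0),
# so the minimizer cannot distinguish the true displacement from its aliases:
print(track_regularized(template, image, pos=10, d_gyro=0.0, lam=0.0))
# With a gyro-predicted flow of about 1.7 px, the prior breaks the tie at d = 2:
print(track_regularized(template, image, pos=10, d_gyro=1.7, lam=0.1))  # -> 2
```

This mirrors the abstract's point: the gyroscope does not merely initialize the search, it reshapes the energy landscape so ambiguous features acquire a unique minimum.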