Processing 3D point data is of primary interest in many areas of computer vision, including object grasping, robot navigation, and 3D object recognition. The recent introduction of cheap range sensors such as the Microsoft Kinect has generated great interest in the computer vision community in developing efficient algorithms for point cloud processing. Previously, capturing a point cloud required expensive specialized sensors, such as lasers or dedicated range imagers; now, range data is readily available from low-cost sensors that provide easily extractable point clouds from a depth map. An interesting challenge, then, is to find different objects in the point cloud, and various feature descriptors have been introduced to match features in point clouds and have been shown to be successful in recognizing objects. However, cheaper sensors are not necessarily designed to produce precise measurements, so the data is not as accurate as a point cloud provided by a laser or a dedicated range finder. The aim of this thesis is to introduce techniques from other domains, such as image processing, into the field of 3D point cloud processing in order to improve rendering, recognition, and classification. Covariance descriptors have proven very successful in image processing as well as in other domains. This work is a first demonstration of the application of covariances in conjunction with 3D point cloud data.
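To make the core idea concrete, the following is a minimal sketch (not the dissertation's actual method) of how a covariance matrix can serve as a compact descriptor for a point cloud neighborhood: stack per-point feature vectors, center them, and take their sample covariance. Here the features are assumed to be plain XYZ coordinates; richer features such as normals or curvature could be appended as extra columns.

```python
import numpy as np

def covariance_descriptor(points):
    """Sample covariance of a point neighborhood as a compact descriptor.

    points: (N, d) array of per-point feature vectors (here plain XYZ
    coordinates; this is an illustrative choice, not the thesis's
    specific feature set).
    """
    centered = points - points.mean(axis=0)
    # Sample covariance: C = X^T X / (N - 1) over the centered features.
    return centered.T @ centered / (len(points) - 1)

# Tiny synthetic "neighborhood": random points scattered near a plane.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(100, 3)) * np.array([1.0, 1.0, 0.05])
C = covariance_descriptor(cloud)
# C is a symmetric d x d matrix; for a near-planar patch, its smallest
# eigenvalue is close to zero, reflecting the flattened geometry.
```

One appeal of such descriptors is their fixed size: regardless of how many points fall in the neighborhood, the descriptor is a d x d symmetric matrix, which makes matching between point clouds of different densities straightforward.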
University of Minnesota Ph.D. dissertation. August 2013. Major: Computer science. Advisor: Nikolaos Papanikolopoulos. 1 computer file (PDF); viii, 120 pages, appendix A.
Fehr, Duc Alexandre.
Covariance based point cloud descriptors for object detection and classification.
Retrieved from the University of Minnesota Digital Conservancy,