Title: Analysis and extensions of Universum learning
Author: Dhar, Sauptik
Type: Thesis or Dissertation
Date issued: 2014-01
Date available: 2014-02-28
URI: https://hdl.handle.net/11299/162636
Language: en-US
Keywords: Binary Classification; Machine Learning; Single-Class Learning; Support Vector Machine; Universum Learning
Description: University of Minnesota Ph.D. dissertation. January 2014. Major: Electrical Engineering. Advisor: Vladimir Cherkassky. 1 computer file (PDF); xiii, 140 pages.

Abstract: Many applications of machine learning involve sparse high-dimensional data, where the number of input features is larger than (or comparable to) the number of data samples. Predictive modeling of such data sets is ill-posed and prone to overfitting. Standard inductive learning methods may not be sufficient for sparse high-dimensional data, which motivates non-standard learning settings. This thesis investigates one such methodology, Learning through Contradictions or Universum Learning, proposed by Vapnik (1998, 2006) for binary classification. This method incorporates a priori knowledge about the application data, in the form of additional Universum samples, into the learning process. However, this methodology is still not well understood and presents a challenge to end users. The overall goal of this thesis is to improve the understanding of Universum Learning and its usability for general users. Specific objectives of this thesis include:
- Development of practical conditions for the effectiveness of Universum Learning for binary classification.
- Extension of Universum Learning to real-life classification settings with different misclassification costs and unbalanced data.
- Extension of Universum Learning to single-class learning problems.
- Extension of Universum Learning to regression problems.
This research will lead to a better understanding and wider adoption of Universum Learning methods for classification, single-class learning, and regression problems, which are common in many real-life applications.
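For context, below is a minimal sketch of the standard soft-margin Universum-SVM formulation that this line of work builds on (Vapnik, 1998, 2006); the notation (penalty parameters C and C_U, insensitivity width epsilon) is chosen here for illustration and may differ from the thesis:

\begin{align}
\min_{\mathbf{w},\, b,\, \boldsymbol{\xi},\, \boldsymbol{\zeta}} \quad & \tfrac{1}{2}\|\mathbf{w}\|^2 \;+\; C \sum_{i=1}^{n} \xi_i \;+\; C_U \sum_{j=1}^{m} \zeta_j \\
\text{s.t.} \quad & y_i \left(\mathbf{w}^\top \mathbf{x}_i + b\right) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad i = 1,\dots,n \quad \text{(labeled training samples)} \\
& \left|\mathbf{w}^\top \mathbf{x}^{*}_j + b\right| \le \varepsilon + \zeta_j, \quad \zeta_j \ge 0, \quad j = 1,\dots,m \quad \text{(Universum samples)}
\end{align}

The Universum samples x*_j are not assigned to either class; instead they are penalized for falling far from the decision boundary, so that the a priori knowledge they encode "contradicts" both classes and biases the choice of hyperplane.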