Title: PNrule: A New Framework for Learning Classifier Models in Data Mining (A Case-Study in Network Intrusion Detection)
Authors: Agarwal, Ramesh; Joshi, Mahesh
Type: Report
Date issued: 2000-03-02
Date available: 2020-09-02
URI: https://hdl.handle.net/11299/215404
Language: en-US

Abstract:
We have developed a new solution framework for the multi-class classification problem in data mining. The method is especially applicable in situations where different classes have widely different distributions in the training data. We applied the technique to the Network Intrusion Detection Problem (KDD-CUP'99).

Our framework is based on a new rule-based classifier model for each target class. The proposed model consists of positive rules (P-rules) that predict the presence of the class, and negative rules (N-rules) that predict its absence. The model is learned in two phases. The first phase discovers a few P-rules that capture most of the positive cases for the target class while keeping the false positive rate at a reasonable level. The goal of the second phase is to discover a few N-rules that remove most of the false positives introduced by the union of all P-rules while keeping the detection rate above an acceptable level. The sets of P- and N-rules are ranked according to certain statistical measures. We gather statistics for the P- and N-rules from the training data, and develop a mechanism to assign a score to each decision made by the classifier. This process is repeated for each target class. We then use the misclassification cost matrix to consolidate the scores from all binary classifiers in arriving at the final decision. In this paper, we describe the details of this proposed framework.

A real-life network intrusion-detection dataset was supplied as part of the KDD-CUP'99 contest. This dataset of 5 million training records has a very highly skewed class distribution (the largest class has 80% of the records, while the smallest has only 0.001% of them). We describe how we applied our framework to this problem. As an aside, we also describe the controversy that we triggered after the contest and how we proved the original test data labels to be wrong.

We compare the results of our approach with those of the 23 other contestants. For the subset of test data consisting of known subclass labels, our technique achieves the best performance of all in terms of both accuracy and misclassification cost penalty.