Interestingness Measures for Association Patterns: A Perspective
Tan, Pang-Ning; Kumar, Vipin
Report, issued 2000-06-05 (deposited 2020-09-02)
https://hdl.handle.net/11299/215423

Abstract: Association rules are valuable patterns because they offer useful insight into the dependencies that exist between attributes of a data set. Because algorithms for mining association-type patterns (such as Apriori) are complete, the number of patterns extracted is often very large. There is therefore a need to prune or rank the discovered patterns according to some measure of interestingness. In this paper, we examine the various interestingness measures that arise in the statistics, machine learning, and data mining literature, and investigate how closely these measures reflect the statistical notion of correlation. We show that support-based pruning is appropriate because it removes mostly uncorrelated and negatively correlated patterns. Another useful measure is the chi-square statistic, which is often used to test whether there is sufficient evidence in the data to reject the hypothesis that the items in a pattern are independent. Our experimental results verify that many of the intuitive measures (such as Piatetsky-Shapiro's rule-interest, confidence, Laplace, entropy gain, etc.) behave very similarly to the correlation coefficient in the region of support values typically encountered in practice. Finally, we introduce a new metric, called the IS measure, and show that it is highly linear with respect to the correlation coefficient for many interesting association patterns.
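To make the measures discussed above concrete, here is a minimal sketch that computes support, the correlation (phi) coefficient, the chi-square statistic, and the IS measure for an item pair from its 2x2 contingency table. The function name and the example counts are illustrative; the IS measure is written here in its common form P(A,B)/sqrt(P(A)P(B)), the geometric mean of support and the interest factor.

```python
import math

def pair_measures(n11, n10, n01, n00):
    """Interestingness measures for an item pair (A, B).

    n11: transactions containing both A and B
    n10: A only; n01: B only; n00: neither.
    """
    n = n11 + n10 + n01 + n00
    pa = (n11 + n10) / n          # P(A)
    pb = (n11 + n01) / n          # P(B)
    pab = n11 / n                 # P(A, B)
    support = pab
    # Phi coefficient: correlation between the binary indicators of A and B.
    phi = (pab - pa * pb) / math.sqrt(pa * (1 - pa) * pb * (1 - pb))
    # Chi-square statistic for independence; for a 2x2 table, chi2 = n * phi^2.
    chi2 = n * phi ** 2
    # IS measure: P(A,B) / sqrt(P(A) P(B)), i.e. sqrt(interest * support).
    is_measure = pab / math.sqrt(pa * pb)
    return support, phi, chi2, is_measure

# Illustrative counts (not from the paper): 100 transactions,
# 30 with both items, 10 with each item alone, 50 with neither.
s, phi, chi2, ism = pair_measures(30, 10, 10, 50)
```

With these counts, support is 0.30, the IS measure is 0.75, and the positive phi value indicates a positively correlated pattern that support-based pruning would retain.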