Title: Implications of Ceiling Effects in Defect Predictors
Authors: Menzies, Tim; Turhan, Burak; Gay, Gregory; Bener, Ayse; Cukic, Bojan; Jiang, Yue
Issued: 2008
Deposited: 2020-12-10
Published in: PROMISE '08: Proceedings of the 4th International Workshop on Predictor Models in Software Engineering
URI: https://hdl.handle.net/11299/217361
Associated research group: Critical Systems Research Group
Type: Report

Abstract:
Context: There are many methods that take static code features as input and output a predictor of fault-prone code modules. These data mining methods have hit a "performance ceiling", i.e., some inherent upper bound on the amount of information offered by, say, static code features when identifying modules that contain faults.
Objective: We seek an explanation for this ceiling effect. Perhaps static code features have "limited information content", i.e., their information can be quickly and completely discovered by even simple learners.
Method: An initial literature review documents the ceiling effect in other work. Next, using three sub-sampling techniques (under-, over-, and micro-sampling), we look for the lower useful bound on the number of training instances.
Results: Using micro-sampling, we find that as few as 50 instances yield as much information as larger training sets.
Conclusions: We have found much evidence for the limited information hypothesis. Further progress in learning defect predictors may not come from better algorithms. Rather, we need to improve the information content of the training data, perhaps with case-based reasoning methods.
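
Note: the micro-sampling experiment described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact protocol: it assumes scikit-learn's GaussianNB as the "simple learner", uses synthetic data in place of the static code metrics studied in the paper, and the helper micro_sample and the sample sizes m are illustrative choices.

# Sketch of a micro-sampling experiment: train on tiny balanced samples
# (m defective + m non-defective instances) and compare against training
# on all available data. Synthetic data stands in for static code features.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for static code features (e.g., size and complexity counts).
n, d = 2000, 10
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)

def micro_sample(X, y, m, rng):
    """Draw m defective and m non-defective training instances at random."""
    pos = rng.choice(np.where(y == 1)[0], size=m, replace=False)
    neg = rng.choice(np.where(y == 0)[0], size=m, replace=False)
    idx = np.concatenate([pos, neg])
    return X[idx], y[idx]

# A simple learner trained on as few as 50 instances vs. the full training set.
for m in (25, 50, 100):
    Xs, ys = micro_sample(X_train, y_train, m, rng)
    clf = GaussianNB().fit(Xs, ys)
    pd = recall_score(y_test, clf.predict(X_test))  # probability of detection
    print(f"micro-sample m={m:3d} (|train|={2*m}): pd={pd:.2f}")

clf_full = GaussianNB().fit(X_train, y_train)
pd_full = recall_score(y_test, clf_full.predict(X_test))
print(f"all {len(y_train)} training instances: pd={pd_full:.2f}")

If the limited-information hypothesis holds, the detection rates from the tiny balanced samples plateau near the rate obtained from the full training set.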