Title: A Comparison of Item Selection Methods and Stopping Rules in Multi-category Computerized Classification Testing
Author: Suen, King Yiu
Type: Thesis or Dissertation
Date issued: 2022-12
Date available: 2023-02-16
URI: https://hdl.handle.net/11299/252505
Language: en
Subjects: Computerized Classification Testing; Educational Measurement; Item Response Theory; Psychometrics
Description: University of Minnesota Ph.D. dissertation. December 2022. Major: Psychology. Advisor: David Weiss. 1 computer file (PDF); ii, 83 pages.

Abstract: Computerized classification testing (CCT) aims to classify examinees into one of two or more categories while maximizing classification accuracy and minimizing test length. Two key components of CCT are the item selection method and the stopping rule. This study used simulation to compare the performance of various item selection methods and stopping rules for multi-category CCT in terms of average test length (ATL) and percentage of correct classifications (PCC) under a wide variety of conditions. The item selection methods examined choose the item that maximizes Fisher information at (a) the current ability estimate, (b) the cutoff nearest the ability estimate, or (c) the sum of the Fisher information at all cutoffs weighted by the likelihood function. The stopping rules considered were a multi-hypothesis sequential probability ratio test (mSPRT) and a multi-category generalized likelihood ratio test (mGLR), each combined with three variations of stochastic curtailment (SC-Standard, SC-MLE, and SC-CI). Manipulated conditions included the number of cutoffs, the distribution of examinee abilities, the width of the indifference region, the shape of the item bank information function, and whether the items were calibrated with estimation error. Results suggested that the combination of mGLR and SC-MLE consistently achieved the best balance of ATL and PCC, while the three item selection methods performed similarly across all conditions.
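To make the first item selection method concrete, the following is a minimal sketch of maximum-information selection under a 2PL IRT model, where an item's Fisher information at ability theta is a^2 * P(theta) * (1 - P(theta)). The item bank parameters and function names here are hypothetical illustrations, not taken from the dissertation; selecting at the nearest cutoff (method b) would simply pass that cutoff in place of the ability estimate.

```python
import numpy as np

def fisher_info_2pl(theta, a, b):
    """Fisher information of 2PL items at ability theta:
    I(theta) = a^2 * P(theta) * (1 - P(theta))."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

def select_item(theta_hat, a, b, administered):
    """Return the index of the unadministered item with maximum
    Fisher information at the current ability estimate theta_hat."""
    info = fisher_info_2pl(theta_hat, a, b)
    info[list(administered)] = -np.inf  # exclude items already given
    return int(np.argmax(info))

# Hypothetical five-item bank (discrimination a, difficulty b).
a = np.array([1.0, 1.5, 0.8, 2.0, 1.2])
b = np.array([-1.0, 0.0, 0.5, 0.2, 1.0])
item = select_item(0.0, a, b, administered=set())  # item 3: high a, b near 0
```

For method (b), replacing `theta_hat` with the cutoff closest to the current estimate (e.g. `min(cutoffs, key=lambda c: abs(c - theta_hat))`) concentrates measurement precision at the classification boundary rather than at the examinee's provisional ability.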
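The stopping rules in the abstract generalize Wald's sequential probability ratio test. As a hedged illustration of the underlying idea, the sketch below implements a plain two-category SPRT around a single cutoff: the log-likelihood ratio of the responses is computed at the two bounds of an indifference region, and testing continues until the ratio crosses one of Wald's thresholds. The function name, parameters, and error rates are illustrative assumptions; the dissertation's mSPRT extends this comparison to multiple categories.

```python
import numpy as np

def sprt_decision(responses, a, b, theta1, theta2, alpha=0.05, beta=0.05):
    """Two-category SPRT for 2PL items (illustrative sketch).

    theta1 < theta2 are the lower and upper bounds of the indifference
    region around one cutoff. Returns "lower", "upper", or "continue".
    """
    u = np.asarray(responses, dtype=float)
    p1 = 1.0 / (1.0 + np.exp(-a * (theta1 - b)))  # P(correct) under H1
    p2 = 1.0 / (1.0 + np.exp(-a * (theta2 - b)))  # P(correct) under H2
    # Log-likelihood ratio of the observed responses: log L(theta2)/L(theta1).
    llr = np.sum(u * np.log(p2 / p1) + (1 - u) * np.log((1 - p2) / (1 - p1)))
    A = np.log((1 - beta) / alpha)   # upper Wald boundary
    B = np.log(beta / (1 - alpha))   # lower Wald boundary
    if llr >= A:
        return "upper"
    if llr <= B:
        return "lower"
    return "continue"
```

With a 0.5-wide indifference region, ten correct answers on moderately discriminating items push the ratio past the upper boundary, whereas a single response leaves the test in the continuation region; this is the mechanism by which sequential rules trade test length against classification error.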