Browsing by Author "Lee, Ji Eun"
Now showing 1 - 2 of 2
Item: Full-metric concurrent calibration for the development of CAT item banks (2011-06) Lee, Ji Eun

The development of an item bank to be used in a CAT frequently requires the administration of different subsets of test items to different groups of examinees, along with a set of common items; linking is then required to place the parameters of the bank onto a common scale. Whereas previous linking research has mainly focused on methods of linking, the present study examined the resulting metric of the linking process. Study 1 investigated the performance of full-metric concurrent calibration by varying the number of groups to be equated, the equivalence in θ among examinees, and the number of common items. Study 2 examined the statistical characteristics of anchor items in full-metric calibration. Results indicated that as more groups were linked, full-metric concurrent calibration tended to show poorer parameter recovery (i.e., larger bias and RMSE), especially for non-equivalent groups. The number of common items appeared relatively unimportant in full-metric concurrent calibration. Better linking was obtained when the average discrimination of the anchor items was higher than that of the total test and the average difficulty of the anchor items matched that of the total test. Larger standard deviations of the anchor item discrimination and difficulty parameters, on the other hand, worsened parameter recovery in full-metric concurrent calibration. Limitations and implications of the study are also discussed.

Item: Hypothesis Testing for Adaptive Measurement of Individual Change (2015-06) Lee, Ji Eun

The significance of individual change has been an important topic in psychology and related fields.
This study investigated the performance of five hypothesis testing methods (Z, likelihood ratio, score test, and Kullback-Leibler divergence tests with uniform and normal prior distributions) and three item selection methods (Fisher information, Kullback-Leibler information, and a modified Kullback-Leibler information) as an extension of Finkelman et al.'s (2010) methods for determining the significance of individual change in the context of adaptive measurement of change (AMC). Comparisons between methods were made based on observed Type I error rates and power. A Monte Carlo simulation was conducted in which the level of item discrimination, the shape of the bank information function, bank size, and test length were varied. Overall, the Z statistic displayed a better balance of Type I error rates and power than the other four statistics across conditions. The efficiency of variable-length AMC relative to fixed-length AMC was also evaluated, based on the number of items saved as well as the precision of decisions.
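The Z statistic for individual change described in the second abstract can be illustrated with a minimal sketch. The abstract does not give a formula, so the code below assumes the common form: the difference between two θ estimates divided by the standard error of that difference; the estimates, standard errors, and function name are hypothetical, and Finkelman et al.'s (2010) exact statistic may differ in detail.

```python
import math

def change_z(theta1, se1, theta2, se2):
    """Z statistic for individual change between two test occasions:
    difference in theta estimates over the SE of the difference.
    (Illustrative form, assumed; not taken from the dissertation.)"""
    return (theta2 - theta1) / math.sqrt(se1 ** 2 + se2 ** 2)

# Hypothetical estimates from two CAT administrations
z = change_z(theta1=-0.20, se1=0.30, theta2=0.75, se2=0.32)
significant = abs(z) > 1.96  # two-sided test at alpha = .05
print(round(z, 3), significant)  # → 2.166 True
```

In variable-length AMC, a test like this could be applied after each item, stopping once the decision about change can be made, which is how item savings relative to fixed-length AMC arise.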
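The parameter-recovery criteria named in the first abstract (bias and RMSE) have standard definitions, sketched below under the assumption that recovery is judged by comparing estimated item parameters against their generating values; the example numbers are invented for illustration.

```python
import math

def bias(estimates, generating):
    """Mean signed error of recovered parameters vs. generating values."""
    return sum(e - g for e, g in zip(estimates, generating)) / len(estimates)

def rmse(estimates, generating):
    """Root mean squared error of recovered parameters."""
    return math.sqrt(sum((e - g) ** 2 for e, g in zip(estimates, generating)) / len(estimates))

# Hypothetical recovered vs. generating item difficulties
est = [0.10, -0.45, 1.20]
gen = [0.00, -0.50, 1.00]
print(round(bias(est, gen), 4), round(rmse(est, gen), 4))  # → 0.1167 0.1323
```

Larger values of either index indicate poorer recovery, which is the sense in which the abstract reports that linking more non-equivalent groups degraded full-metric concurrent calibration.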