Browsing by Subject "CAT"
Now showing 1 - 3 of 3
Item: Application of the bifactor model to computerized adaptive testing. (2011-01) Seo, Dong Gi

Most CAT research has been conducted within the framework of unidimensional item response theory (IRT). However, many psychological variables are multidimensional and might benefit from a multidimensional approach to CAT. In addition, a number of psychological variables (e.g., quality of life, depression) can be conceptualized as consistent with a bifactor model (Holzinger & Swineford, 1937), in which there is a general dimension and some number of subdomains, with each item loading on only one of those subdomains. The present study extended the bifactor CAT work of Weiss and Gibbons (2007) by comparing it to a fully multidimensional bifactor method (the MBICAT algorithm) that uses multidimensional maximum likelihood estimation and Bayesian estimation for the bifactor model. Although Weiss and Gibbons applied the bifactor model to CAT (the BICAT algorithm), their methods for item selection and scoring were based on unidimensional IRT. This study therefore investigated a fully multidimensional bifactor CAT algorithm using simulated data. The MBICAT algorithm was compared to the two BICAT algorithms under three factors: the number of group factors, the group-factor discrimination condition, and the estimation method. A fixed test length was used as the termination criterion for the CATs in Study 1. The accuracy of estimates from the BICAT and MBICAT algorithms was evaluated with the correlation between true and estimated scores, the root mean square error (RMSE), and the observed standard error (OSE). Two termination criteria (OSE = .50 and .55) were used to investigate the efficiency of the MBICAT in Study 2. The study demonstrated that the MBICAT algorithm worked well when latent scores on the secondary dimensions were estimated properly. Although the MBICAT algorithm did not improve accuracy or efficiency for the general-factor scores compared to the two BICAT algorithms, it did improve accuracy and efficiency for the group factors. In the two BICAT algorithms, the use of differential entry on the group factors made no difference, in terms of accuracy and efficiency, compared to selecting the initial item at a trait level of 0 for both the general-factor and group-factor scales (Gibbons et al., 2008).
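The abstract above describes the bifactor CAT only at a high level. Purely as an illustration of the bifactor structure and maximum-information item selection it builds on, here is a minimal Python sketch. It assumes a bifactor 2PL parameterization (one general slope, one group-factor slope, an intercept) and a small hypothetical item pool; it is not the BICAT or MBICAT implementation.

```python
import numpy as np

def bifactor_prob(theta_g, theta_s, a_g, a_s, d):
    """P(correct) for a bifactor 2PL item with general slope a_g,
    one group-factor slope a_s, and intercept d (all hypothetical)."""
    return 1.0 / (1.0 + np.exp(-(a_g * theta_g + a_s * theta_s + d)))

def info_general(theta_g, theta_s, a_g, a_s, d):
    """Fisher information about the general factor from one item."""
    p = bifactor_prob(theta_g, theta_s, a_g, a_s, d)
    return a_g**2 * p * (1.0 - p)

# Hypothetical pool: each row is (a_g, a_s, d, group index).
pool = [(1.2, 0.8,  0.0, 0),
        (0.9, 1.1, -0.5, 1),
        (1.5, 0.6,  0.3, 0)]

theta_g_hat = 0.0            # provisional general-factor estimate
theta_s_hat = np.zeros(2)    # provisional group-factor estimates

# Maximum-information selection: administer the item that is most
# informative about the general factor at the current estimates.
info = [info_general(theta_g_hat, theta_s_hat[g], a_g, a_s, d)
        for a_g, a_s, d, g in pool]
best = int(np.argmax(info))
print(f"administer item {best} (information = {info[best]:.3f})")
```

Under this parameterization, selecting on a_g² p(1 − p) targets information about the general factor; a fully multidimensional method like the MBICAT would instead score and select using the joint general-and-group-factor structure.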
Item: Differential item functioning in computerized adaptive testing: can CAT self-adjust enough? (2014-04) Piromsombat, Chayut

Two issues related to differential item functioning (DIF) in the context of computerized adaptive testing (CAT) were addressed in this study: 1) the effect of DIF in operational items on the accuracy of the ability estimate (θ̂_CAT), and 2) the accuracy of detecting DIF in pretest items when DIF occurred in operational items and examinees were matched on the number-correct score (NCS), the ability estimate obtained from nonadaptive computer-based testing (θ̂_CBT), or θ̂_CAT. To investigate the first issue, a series of simulations was conducted varying the DIF magnitude (0, .4, 1, and 1.6); the DIF type (uniform and nonuniform); the DIF contamination, i.e., the number of DIF items (6, 15, and 24 items out of the 30-item test); and the DIF occurrence (first, middle, last, and across stages of the CAT). For the second issue, test impact (μ_R − μ_F = 0 and 1) and sample-size ratio (N_R:N_F = 1:1 and 9:1) were also added to the simulation.

The first simulation showed that CAT could adjust for the effect of DIF in operational items if DIF occurred in the early stages of the CAT, though with some restrictions. Specifically, CAT successfully adjusted for the effect of DIF at the earlier stages when the number of DIF items and the magnitude of DIF were moderate. In other situations, CAT appeared to reduce the effect of DIF, as seen in the trend of standard errors, which increased when DIF items were delivered and decreased after the CAT administered a new DIF-free item. However, the self-adjustment of CAT was not enough to recover θ̂_CAT from DIF effects. The results of the second simulation suggested that matching examinees on θ̂_CAT did not provide notable advantages over the NCS and θ̂_CBT in most simulation conditions. Overall, when operational items were contaminated with DIF of moderate magnitude, the three matching variables yielded comparable DIF detection in pretest items. However, when the level of DIF contamination in operational items increased, matching examinees on θ̂_CAT yielded the worst detection of DIF in pretest items, especially when large uniform-DIF items were used in the operational test. It was also evident that DIF in operational items, especially CAT items, led to false identification of the DIF type: pretest items exhibiting uniform DIF were mistakenly identified as having nonuniform DIF when the matching variable was obtained from operational items with nonuniform DIF.

Item: On-The-Fly Parameter Estimation Based on Item Response Theory in Item-based Adaptive Learning Systems. (2020-11) Jiang, Shengyu

An online learning system has the capacity to offer customized content that caters to each learner's needs, and it has seen growing interest from industry and academia alike in recent years. Noting the similarity between online learning and the more established adaptive testing procedures, researchers have focused on applying the techniques of adaptive testing to the learning environment. Yet, due to the inherent differences between learning and testing, major challenges hinder the development of adaptive learning systems. To tackle these challenges, a new online learning system is proposed that features a Bayesian algorithm for computing item and person parameters on the fly. The new algorithm is validated in two separate simulation studies, and the results show that the system, while being cost-effective to build and easy to implement, can achieve adequate adaptivity and measurement precision for the individual learner.
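The abstract above mentions on-the-fly Bayesian estimation without detail. The following is a minimal Python sketch of one standard way to update a learner's ability after each response: a grid-based posterior update assuming a 2PL IRT model, a standard-normal prior, and known item parameters. The grid, prior, and item values are illustrative assumptions, not the dissertation's actual algorithm (which also estimates item parameters online).

```python
import numpy as np

# Ability grid and an unnormalized N(0, 1) prior over it.
grid = np.linspace(-4, 4, 161)
posterior = np.exp(-0.5 * grid**2)
posterior /= posterior.sum()

def update(posterior, a, b, correct):
    """Multiply the current posterior by the 2PL likelihood of one
    response (discrimination a, difficulty b), then renormalize."""
    p = 1.0 / (1.0 + np.exp(-a * (grid - b)))
    like = p if correct else (1.0 - p)
    posterior = posterior * like
    return posterior / posterior.sum()

# Simulated responses to three hypothetical items: (a, b, correct).
for a, b, correct in [(1.2, 0.0, True), (0.8, 1.0, False), (1.5, -0.5, True)]:
    posterior = update(posterior, a, b, correct)

theta_eap = float(np.dot(grid, posterior))  # EAP ability estimate
print(f"EAP ability estimate: {theta_eap:.3f}")
```

After each update, the posterior standard deviation can serve as a running measure of precision, which is one way a system of this kind could decide when a learner's ability is estimated well enough to route new content.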